Test Report: KVM_Linux_crio 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Failed tests (30/312)

Order | Failed test | Duration (s)
33 TestAddons/parallel/Registry 74.33
34 TestAddons/parallel/Ingress 152.94
36 TestAddons/parallel/MetricsServer 303.27
164 TestMultiControlPlane/serial/StopSecondaryNode 141.8
166 TestMultiControlPlane/serial/RestartSecondaryNode 55.85
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 365.05
171 TestMultiControlPlane/serial/StopCluster 141.69
231 TestMultiNode/serial/RestartKeepsNodes 330.05
233 TestMultiNode/serial/StopMultiNode 141.28
240 TestPreload 270.97
248 TestKubernetesUpgrade 430.44
284 TestPause/serial/SecondStartNoReconfiguration 40.2
319 TestStartStop/group/old-k8s-version/serial/FirstStart 283.34
339 TestStartStop/group/no-preload/serial/Stop 139.06
342 TestStartStop/group/embed-certs/serial/Stop 139.13
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.92
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
349 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 106.99
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/SecondStart 702.75
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.07
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.07
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.08
360 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.36
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 437
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 465.29
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 330.75
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 163.94
TestAddons/parallel/Registry (74.33s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.970327ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003083055s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00370537s
addons_test.go:342: (dbg) Run:  kubectl --context addons-647117 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-647117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-647117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.07668068s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-647117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 ip
2024/08/29 18:17:40 [DEBUG] GET http://192.168.39.43:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable registry --alsologtostderr -v=1
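The failing step above is the in-cluster reachability check against the registry Service, which timed out after 1m0s. A minimal manual reproduction sketch, assuming the addons-647117 profile is still running; the Service/Endpoints checks are added diagnostics and are not part of the test itself:

    # Re-run the same in-cluster check the test performs (command taken verbatim from the log above)
    kubectl --context addons-647117 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Added diagnostics: confirm the registry Service exists and has ready endpoints
    kubectl --context addons-647117 -n kube-system get svc registry
    kubectl --context addons-647117 -n kube-system get endpoints registry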
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-647117 -n addons-647117
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 logs -n 25: (1.628097497s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-366415                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-366415                                                                     | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only                                                                     | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-105926                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| delete  | -p download-only-105926                                                                     | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| delete  | -p download-only-366415                                                                     | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| delete  | -p download-only-105926                                                                     | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728877 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | binary-mirror-728877                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38491                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728877                                                                     | binary-mirror-728877 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-647117 --wait=true                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-647117 ssh cat                                                                       | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | /opt/local-path-provisioner/pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-647117                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-647117                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-647117 ip                                                                            | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:06:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:06:13.977708   21003 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:06:13.977815   21003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:13.977823   21003 out.go:358] Setting ErrFile to fd 2...
	I0829 18:06:13.977827   21003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:13.977999   21003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:06:13.978601   21003 out.go:352] Setting JSON to false
	I0829 18:06:13.979455   21003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2921,"bootTime":1724951853,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:06:13.979510   21003 start.go:139] virtualization: kvm guest
	I0829 18:06:14.042675   21003 out.go:177] * [addons-647117] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:06:14.104740   21003 notify.go:220] Checking for updates...
	I0829 18:06:14.167604   21003 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:06:14.229702   21003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:06:14.294106   21003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:06:14.342682   21003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:14.344101   21003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:06:14.345367   21003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:06:14.346953   21003 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:06:14.377848   21003 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:06:14.379196   21003 start.go:297] selected driver: kvm2
	I0829 18:06:14.379209   21003 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:06:14.379220   21003 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:06:14.379903   21003 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:14.379987   21003 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:06:14.395270   21003 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:06:14.395314   21003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:14.395519   21003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:14.395554   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:14.395565   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:14.395574   21003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:14.395622   21003 start.go:340] cluster config:
	{Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:14.395709   21003 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:14.397385   21003 out.go:177] * Starting "addons-647117" primary control-plane node in "addons-647117" cluster
	I0829 18:06:14.398568   21003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:06:14.398598   21003 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:06:14.398606   21003 cache.go:56] Caching tarball of preloaded images
	I0829 18:06:14.398682   21003 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:06:14.398692   21003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:06:14.398994   21003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json ...
	I0829 18:06:14.399012   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json: {Name:mkcc99c38dc1733f24d9d95208d6cd89ecd08f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:14.399129   21003 start.go:360] acquireMachinesLock for addons-647117: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:06:14.399169   21003 start.go:364] duration metric: took 27.979µs to acquireMachinesLock for "addons-647117"
	I0829 18:06:14.399185   21003 start.go:93] Provisioning new machine with config: &{Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:14.399236   21003 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:06:14.400651   21003 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:06:14.400800   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:14.400842   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:14.414391   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0829 18:06:14.414771   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:14.415264   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:14.415277   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:14.415573   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:14.415698   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:14.415826   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:14.415924   21003 start.go:159] libmachine.API.Create for "addons-647117" (driver="kvm2")
	I0829 18:06:14.415948   21003 client.go:168] LocalClient.Create starting
	I0829 18:06:14.415980   21003 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:06:14.569250   21003 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:06:14.895450   21003 main.go:141] libmachine: Running pre-create checks...
	I0829 18:06:14.895478   21003 main.go:141] libmachine: (addons-647117) Calling .PreCreateCheck
	I0829 18:06:14.896002   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:14.896427   21003 main.go:141] libmachine: Creating machine...
	I0829 18:06:14.896441   21003 main.go:141] libmachine: (addons-647117) Calling .Create
	I0829 18:06:14.896565   21003 main.go:141] libmachine: (addons-647117) Creating KVM machine...
	I0829 18:06:14.897900   21003 main.go:141] libmachine: (addons-647117) DBG | found existing default KVM network
	I0829 18:06:14.898643   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:14.898505   21025 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0829 18:06:14.898675   21003 main.go:141] libmachine: (addons-647117) DBG | created network xml: 
	I0829 18:06:14.898690   21003 main.go:141] libmachine: (addons-647117) DBG | <network>
	I0829 18:06:14.898701   21003 main.go:141] libmachine: (addons-647117) DBG |   <name>mk-addons-647117</name>
	I0829 18:06:14.898712   21003 main.go:141] libmachine: (addons-647117) DBG |   <dns enable='no'/>
	I0829 18:06:14.898720   21003 main.go:141] libmachine: (addons-647117) DBG |   
	I0829 18:06:14.898727   21003 main.go:141] libmachine: (addons-647117) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:06:14.898734   21003 main.go:141] libmachine: (addons-647117) DBG |     <dhcp>
	I0829 18:06:14.898743   21003 main.go:141] libmachine: (addons-647117) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:06:14.898752   21003 main.go:141] libmachine: (addons-647117) DBG |     </dhcp>
	I0829 18:06:14.898766   21003 main.go:141] libmachine: (addons-647117) DBG |   </ip>
	I0829 18:06:14.898775   21003 main.go:141] libmachine: (addons-647117) DBG |   
	I0829 18:06:14.898785   21003 main.go:141] libmachine: (addons-647117) DBG | </network>
	I0829 18:06:14.898795   21003 main.go:141] libmachine: (addons-647117) DBG | 
	I0829 18:06:14.904085   21003 main.go:141] libmachine: (addons-647117) DBG | trying to create private KVM network mk-addons-647117 192.168.39.0/24...
	I0829 18:06:14.968799   21003 main.go:141] libmachine: (addons-647117) DBG | private KVM network mk-addons-647117 192.168.39.0/24 created
	I0829 18:06:14.968849   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:14.968765   21025 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:14.968877   21003 main.go:141] libmachine: (addons-647117) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 ...
	I0829 18:06:14.968903   21003 main.go:141] libmachine: (addons-647117) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:06:14.968915   21003 main.go:141] libmachine: (addons-647117) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:06:15.221752   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.221579   21025 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa...
	I0829 18:06:15.315051   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.314930   21025 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/addons-647117.rawdisk...
	I0829 18:06:15.315079   21003 main.go:141] libmachine: (addons-647117) DBG | Writing magic tar header
	I0829 18:06:15.315090   21003 main.go:141] libmachine: (addons-647117) DBG | Writing SSH key tar header
	I0829 18:06:15.315098   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.315038   21025 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 ...
	I0829 18:06:15.315184   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117
	I0829 18:06:15.315224   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:06:15.315248   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:15.315262   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 (perms=drwx------)
	I0829 18:06:15.315273   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:06:15.315304   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:06:15.315312   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:06:15.315321   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:06:15.315328   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home
	I0829 18:06:15.315335   21003 main.go:141] libmachine: (addons-647117) DBG | Skipping /home - not owner
	I0829 18:06:15.315347   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:06:15.315365   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:06:15.315380   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:06:15.315392   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:06:15.315402   21003 main.go:141] libmachine: (addons-647117) Creating domain...
	I0829 18:06:15.316378   21003 main.go:141] libmachine: (addons-647117) define libvirt domain using xml: 
	I0829 18:06:15.316405   21003 main.go:141] libmachine: (addons-647117) <domain type='kvm'>
	I0829 18:06:15.316415   21003 main.go:141] libmachine: (addons-647117)   <name>addons-647117</name>
	I0829 18:06:15.316423   21003 main.go:141] libmachine: (addons-647117)   <memory unit='MiB'>4000</memory>
	I0829 18:06:15.316431   21003 main.go:141] libmachine: (addons-647117)   <vcpu>2</vcpu>
	I0829 18:06:15.316442   21003 main.go:141] libmachine: (addons-647117)   <features>
	I0829 18:06:15.316449   21003 main.go:141] libmachine: (addons-647117)     <acpi/>
	I0829 18:06:15.316456   21003 main.go:141] libmachine: (addons-647117)     <apic/>
	I0829 18:06:15.316462   21003 main.go:141] libmachine: (addons-647117)     <pae/>
	I0829 18:06:15.316466   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316471   21003 main.go:141] libmachine: (addons-647117)   </features>
	I0829 18:06:15.316478   21003 main.go:141] libmachine: (addons-647117)   <cpu mode='host-passthrough'>
	I0829 18:06:15.316485   21003 main.go:141] libmachine: (addons-647117)   
	I0829 18:06:15.316498   21003 main.go:141] libmachine: (addons-647117)   </cpu>
	I0829 18:06:15.316508   21003 main.go:141] libmachine: (addons-647117)   <os>
	I0829 18:06:15.316517   21003 main.go:141] libmachine: (addons-647117)     <type>hvm</type>
	I0829 18:06:15.316539   21003 main.go:141] libmachine: (addons-647117)     <boot dev='cdrom'/>
	I0829 18:06:15.316547   21003 main.go:141] libmachine: (addons-647117)     <boot dev='hd'/>
	I0829 18:06:15.316552   21003 main.go:141] libmachine: (addons-647117)     <bootmenu enable='no'/>
	I0829 18:06:15.316559   21003 main.go:141] libmachine: (addons-647117)   </os>
	I0829 18:06:15.316563   21003 main.go:141] libmachine: (addons-647117)   <devices>
	I0829 18:06:15.316572   21003 main.go:141] libmachine: (addons-647117)     <disk type='file' device='cdrom'>
	I0829 18:06:15.316581   21003 main.go:141] libmachine: (addons-647117)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/boot2docker.iso'/>
	I0829 18:06:15.316590   21003 main.go:141] libmachine: (addons-647117)       <target dev='hdc' bus='scsi'/>
	I0829 18:06:15.316595   21003 main.go:141] libmachine: (addons-647117)       <readonly/>
	I0829 18:06:15.316602   21003 main.go:141] libmachine: (addons-647117)     </disk>
	I0829 18:06:15.316607   21003 main.go:141] libmachine: (addons-647117)     <disk type='file' device='disk'>
	I0829 18:06:15.316626   21003 main.go:141] libmachine: (addons-647117)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:06:15.316642   21003 main.go:141] libmachine: (addons-647117)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/addons-647117.rawdisk'/>
	I0829 18:06:15.316654   21003 main.go:141] libmachine: (addons-647117)       <target dev='hda' bus='virtio'/>
	I0829 18:06:15.316661   21003 main.go:141] libmachine: (addons-647117)     </disk>
	I0829 18:06:15.316669   21003 main.go:141] libmachine: (addons-647117)     <interface type='network'>
	I0829 18:06:15.316676   21003 main.go:141] libmachine: (addons-647117)       <source network='mk-addons-647117'/>
	I0829 18:06:15.316682   21003 main.go:141] libmachine: (addons-647117)       <model type='virtio'/>
	I0829 18:06:15.316691   21003 main.go:141] libmachine: (addons-647117)     </interface>
	I0829 18:06:15.316697   21003 main.go:141] libmachine: (addons-647117)     <interface type='network'>
	I0829 18:06:15.316707   21003 main.go:141] libmachine: (addons-647117)       <source network='default'/>
	I0829 18:06:15.316722   21003 main.go:141] libmachine: (addons-647117)       <model type='virtio'/>
	I0829 18:06:15.316738   21003 main.go:141] libmachine: (addons-647117)     </interface>
	I0829 18:06:15.316747   21003 main.go:141] libmachine: (addons-647117)     <serial type='pty'>
	I0829 18:06:15.316759   21003 main.go:141] libmachine: (addons-647117)       <target port='0'/>
	I0829 18:06:15.316779   21003 main.go:141] libmachine: (addons-647117)     </serial>
	I0829 18:06:15.316794   21003 main.go:141] libmachine: (addons-647117)     <console type='pty'>
	I0829 18:06:15.316812   21003 main.go:141] libmachine: (addons-647117)       <target type='serial' port='0'/>
	I0829 18:06:15.316825   21003 main.go:141] libmachine: (addons-647117)     </console>
	I0829 18:06:15.316835   21003 main.go:141] libmachine: (addons-647117)     <rng model='virtio'>
	I0829 18:06:15.316848   21003 main.go:141] libmachine: (addons-647117)       <backend model='random'>/dev/random</backend>
	I0829 18:06:15.316855   21003 main.go:141] libmachine: (addons-647117)     </rng>
	I0829 18:06:15.316860   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316866   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316871   21003 main.go:141] libmachine: (addons-647117)   </devices>
	I0829 18:06:15.316880   21003 main.go:141] libmachine: (addons-647117) </domain>
	I0829 18:06:15.316887   21003 main.go:141] libmachine: (addons-647117) 
	I0829 18:06:15.323470   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:5e:cf:4e in network default
	I0829 18:06:15.324032   21003 main.go:141] libmachine: (addons-647117) Ensuring networks are active...
	I0829 18:06:15.324048   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:15.324701   21003 main.go:141] libmachine: (addons-647117) Ensuring network default is active
	I0829 18:06:15.325084   21003 main.go:141] libmachine: (addons-647117) Ensuring network mk-addons-647117 is active
	I0829 18:06:15.325712   21003 main.go:141] libmachine: (addons-647117) Getting domain xml...
	I0829 18:06:15.326373   21003 main.go:141] libmachine: (addons-647117) Creating domain...
	I0829 18:06:16.712917   21003 main.go:141] libmachine: (addons-647117) Waiting to get IP...
	I0829 18:06:16.713812   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:16.714232   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:16.714268   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:16.714191   21025 retry.go:31] will retry after 238.340471ms: waiting for machine to come up
	I0829 18:06:16.954554   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:16.954978   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:16.955001   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:16.954942   21025 retry.go:31] will retry after 341.720897ms: waiting for machine to come up
	I0829 18:06:17.298471   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:17.298940   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:17.298959   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:17.298900   21025 retry.go:31] will retry after 367.433652ms: waiting for machine to come up
	I0829 18:06:17.668160   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:17.668555   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:17.668592   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:17.668512   21025 retry.go:31] will retry after 516.863981ms: waiting for machine to come up
	I0829 18:06:18.187183   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:18.187670   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:18.187696   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:18.187622   21025 retry.go:31] will retry after 716.140795ms: waiting for machine to come up
	I0829 18:06:18.905500   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:18.905827   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:18.905850   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:18.905787   21025 retry.go:31] will retry after 722.824428ms: waiting for machine to come up
	I0829 18:06:19.630367   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:19.630812   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:19.630841   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:19.630788   21025 retry.go:31] will retry after 1.117686988s: waiting for machine to come up
	I0829 18:06:20.750072   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:20.750586   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:20.750618   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:20.750537   21025 retry.go:31] will retry after 1.201180121s: waiting for machine to come up
	I0829 18:06:21.953781   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:21.954227   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:21.954255   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:21.954176   21025 retry.go:31] will retry after 1.317171091s: waiting for machine to come up
	I0829 18:06:23.273606   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:23.274028   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:23.274056   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:23.273995   21025 retry.go:31] will retry after 2.013319683s: waiting for machine to come up
	I0829 18:06:25.289339   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:25.289856   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:25.289881   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:25.289815   21025 retry.go:31] will retry after 2.820105587s: waiting for machine to come up
	I0829 18:06:28.113685   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:28.113965   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:28.113988   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:28.113931   21025 retry.go:31] will retry after 2.971291296s: waiting for machine to come up
	I0829 18:06:31.088861   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:31.089282   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:31.089302   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:31.089247   21025 retry.go:31] will retry after 3.52398133s: waiting for machine to come up
	I0829 18:06:34.615265   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.615739   21003 main.go:141] libmachine: (addons-647117) Found IP for machine: 192.168.39.43
	I0829 18:06:34.615757   21003 main.go:141] libmachine: (addons-647117) Reserving static IP address...
	I0829 18:06:34.615765   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.616209   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find host DHCP lease matching {name: "addons-647117", mac: "52:54:00:b2:0d:0e", ip: "192.168.39.43"} in network mk-addons-647117
	I0829 18:06:34.684039   21003 main.go:141] libmachine: (addons-647117) DBG | Getting to WaitForSSH function...
	I0829 18:06:34.684068   21003 main.go:141] libmachine: (addons-647117) Reserved static IP address: 192.168.39.43
	I0829 18:06:34.684097   21003 main.go:141] libmachine: (addons-647117) Waiting for SSH to be available...
	I0829 18:06:34.686579   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.686973   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.687021   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.687238   21003 main.go:141] libmachine: (addons-647117) DBG | Using SSH client type: external
	I0829 18:06:34.687266   21003 main.go:141] libmachine: (addons-647117) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa (-rw-------)
	I0829 18:06:34.687303   21003 main.go:141] libmachine: (addons-647117) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:06:34.687317   21003 main.go:141] libmachine: (addons-647117) DBG | About to run SSH command:
	I0829 18:06:34.687334   21003 main.go:141] libmachine: (addons-647117) DBG | exit 0
	I0829 18:06:34.813742   21003 main.go:141] libmachine: (addons-647117) DBG | SSH cmd err, output: <nil>: 
	I0829 18:06:34.814023   21003 main.go:141] libmachine: (addons-647117) KVM machine creation complete!
	I0829 18:06:34.814355   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:34.814860   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:34.815029   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:34.815194   21003 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:06:34.815210   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:34.816482   21003 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:06:34.816493   21003 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:06:34.816499   21003 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:06:34.816504   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:34.818985   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.819310   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.819338   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.819489   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:34.819706   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.819854   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.820002   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:34.820159   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:34.820371   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:34.820389   21003 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:06:34.921578   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:06:34.921611   21003 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:06:34.921625   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:34.924576   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.924991   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.925016   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.925174   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:34.925364   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.925535   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.925681   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:34.925862   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:34.926048   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:34.926062   21003 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:06:35.026824   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:06:35.026889   21003 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:06:35.026897   21003 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:06:35.026904   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.027145   21003 buildroot.go:166] provisioning hostname "addons-647117"
	I0829 18:06:35.027170   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.027344   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.029702   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.030060   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.030099   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.030232   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.030413   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.030536   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.030687   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.030879   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.031071   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.031084   21003 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-647117 && echo "addons-647117" | sudo tee /etc/hostname
	I0829 18:06:35.143742   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-647117
	
	I0829 18:06:35.143777   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.146325   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.146651   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.146679   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.146798   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.146981   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.147130   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.147305   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.147468   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.147673   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.147697   21003 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-647117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-647117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-647117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:06:35.254118   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:06:35.254140   21003 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:06:35.254159   21003 buildroot.go:174] setting up certificates
	I0829 18:06:35.254169   21003 provision.go:84] configureAuth start
	I0829 18:06:35.254180   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.254506   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:35.256912   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.257308   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.257336   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.257542   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.259793   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.260096   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.260130   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.260195   21003 provision.go:143] copyHostCerts
	I0829 18:06:35.260261   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:06:35.260392   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:06:35.260483   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:06:35.260557   21003 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.addons-647117 san=[127.0.0.1 192.168.39.43 addons-647117 localhost minikube]
	I0829 18:06:35.482587   21003 provision.go:177] copyRemoteCerts
	I0829 18:06:35.482639   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:06:35.482659   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.485179   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.485582   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.485615   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.485697   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.485936   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.486060   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.486278   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:35.563694   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:06:35.586261   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:06:35.607564   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:06:35.628579   21003 provision.go:87] duration metric: took 374.398756ms to configureAuth
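The configureAuth step above generates a CA-signed server certificate whose SANs are the hostnames and IPs shown in the log (127.0.0.1, 192.168.39.43, addons-647117, localhost, minikube) and copies it to /etc/docker on the guest. Below is a minimal sketch of that kind of certificate generation with the Go standard library; the key type, validity period, and subject names are assumptions for illustration, not minikube's provisioning code.

// Illustrative sketch only: create a self-signed CA, then a server certificate with
// the SANs seen in the log, signed by that CA. Error handling is mostly elided.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stands in for ca.pem / ca-key.pem).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-647117"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		DNSNames:     []string{"addons-647117", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}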
	I0829 18:06:35.628613   21003 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:06:35.628805   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:35.628886   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.631347   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.631736   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.631762   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.631917   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.632078   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.632214   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.632368   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.632522   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.632739   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.632758   21003 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:06:35.841964   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:06:35.841995   21003 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:06:35.842008   21003 main.go:141] libmachine: (addons-647117) Calling .GetURL
	I0829 18:06:35.843265   21003 main.go:141] libmachine: (addons-647117) DBG | Using libvirt version 6000000
	I0829 18:06:35.845052   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.845418   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.845442   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.845675   21003 main.go:141] libmachine: Docker is up and running!
	I0829 18:06:35.845695   21003 main.go:141] libmachine: Reticulating splines...
	I0829 18:06:35.845701   21003 client.go:171] duration metric: took 21.429743968s to LocalClient.Create
	I0829 18:06:35.845719   21003 start.go:167] duration metric: took 21.429794926s to libmachine.API.Create "addons-647117"
	I0829 18:06:35.845736   21003 start.go:293] postStartSetup for "addons-647117" (driver="kvm2")
	I0829 18:06:35.845745   21003 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:06:35.845761   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:35.846039   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:06:35.846062   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.848219   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.848637   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.848666   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.848784   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.848951   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.849108   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.849229   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:35.928027   21003 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:06:35.932082   21003 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:06:35.932107   21003 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:06:35.932175   21003 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:06:35.932199   21003 start.go:296] duration metric: took 86.457988ms for postStartSetup
	I0829 18:06:35.932245   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:35.932768   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:35.935311   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.935660   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.935689   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.935874   21003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json ...
	I0829 18:06:35.936046   21003 start.go:128] duration metric: took 21.536800088s to createHost
	I0829 18:06:35.936069   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.938226   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.938550   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.938580   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.938691   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.938940   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.939092   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.939226   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.939371   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.939518   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.939538   21003 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:06:36.038471   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724954796.013287706
	
	I0829 18:06:36.038494   21003 fix.go:216] guest clock: 1724954796.013287706
	I0829 18:06:36.038502   21003 fix.go:229] Guest: 2024-08-29 18:06:36.013287706 +0000 UTC Remote: 2024-08-29 18:06:35.936057575 +0000 UTC m=+21.991416237 (delta=77.230131ms)
	I0829 18:06:36.038547   21003 fix.go:200] guest clock delta is within tolerance: 77.230131ms
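The guest-clock check above runs `date +%s.%N` over SSH and compares the result with the host's wall clock; the 77ms delta reported is within tolerance, so no time sync is forced. A minimal Go sketch of that comparison follows, assuming the guest output from the log; the parsing and the one-second tolerance are illustrative choices, not minikube's exact values.

// Illustrative sketch: parse the guest's `date +%s.%N` output and check clock skew.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1724954796.013287706" // what `date +%s.%N` returned over SSH in the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest) // host "now" minus guest clock

	const tolerance = time.Second // assumed tolerance for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; time sync needed\n", delta)
	}
}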
	I0829 18:06:36.038563   21003 start.go:83] releasing machines lock for "addons-647117", held for 21.639379915s
	I0829 18:06:36.038587   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.038894   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:36.041687   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.042103   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.042129   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.042309   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.042820   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.042990   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.043053   21003 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:06:36.043093   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:36.043222   21003 ssh_runner.go:195] Run: cat /version.json
	I0829 18:06:36.043244   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:36.045522   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.045759   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.045868   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.045890   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.046150   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:36.046153   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.046208   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.046302   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:36.046386   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:36.046567   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:36.046570   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:36.046716   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:36.046731   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:36.046852   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:36.118579   21003 ssh_runner.go:195] Run: systemctl --version
	I0829 18:06:36.156970   21003 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:06:36.311217   21003 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:06:36.316594   21003 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:06:36.316675   21003 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:06:36.332219   21003 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:06:36.332250   21003 start.go:495] detecting cgroup driver to use...
	I0829 18:06:36.332314   21003 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:06:36.347317   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:06:36.360521   21003 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:06:36.360590   21003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:06:36.373585   21003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:06:36.386343   21003 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:06:36.502547   21003 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:06:36.637748   21003 docker.go:233] disabling docker service ...
	I0829 18:06:36.637830   21003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:06:36.651446   21003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:06:36.663735   21003 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:06:36.798359   21003 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:06:36.922508   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:06:36.935648   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:36.952902   21003 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:06:36.952958   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.963059   21003 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:06:36.963140   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.973105   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.982774   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.992245   21003 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:37.001920   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.011179   21003 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.026117   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.035522   21003 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:37.043886   21003 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:06:37.043934   21003 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:06:37.055999   21003 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:06:37.064714   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:37.196530   21003 ssh_runner.go:195] Run: sudo systemctl restart crio
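The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, sysctls) and then restarts CRI-O. The sketch below shows the same kind of in-place key rewrite in Go; the sample config contents are made up for illustration and only the pause-image, cgroup-manager, and conmon-cgroup edits are reproduced.

// Illustrative sketch: regex-based rewrites equivalent to the sed edits in the log,
// applied to a toy CRI-O drop-in config held in memory.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// delete conmon_cgroup and re-add it as "pod" (combined here into one replacement)
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*$`)
	conf = conmon.ReplaceAllString(conf, `conmon_cgroup = "pod"`)

	fmt.Print(conf)
}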
	I0829 18:06:37.287929   21003 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:06:37.288028   21003 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:06:37.292396   21003 start.go:563] Will wait 60s for crictl version
	I0829 18:06:37.292454   21003 ssh_runner.go:195] Run: which crictl
	I0829 18:06:37.296073   21003 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:06:37.332725   21003 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:06:37.332849   21003 ssh_runner.go:195] Run: crio --version
	I0829 18:06:37.359173   21003 ssh_runner.go:195] Run: crio --version
	I0829 18:06:37.388107   21003 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:06:37.389284   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:37.391507   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:37.391814   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:37.391841   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:37.391979   21003 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:37.395789   21003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:37.408717   21003 kubeadm.go:883] updating cluster {Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0829 18:06:37.408820   21003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:06:37.408873   21003 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:06:37.443962   21003 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:06:37.444029   21003 ssh_runner.go:195] Run: which lz4
	I0829 18:06:37.447695   21003 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:06:37.451549   21003 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:06:37.451575   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:06:38.585685   21003 crio.go:462] duration metric: took 1.138016489s to copy over tarball
	I0829 18:06:38.585747   21003 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:06:40.668015   21003 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082235438s)
	I0829 18:06:40.668044   21003 crio.go:469] duration metric: took 2.082332165s to extract the tarball
	I0829 18:06:40.668052   21003 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:06:40.704995   21003 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:06:40.744652   21003 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:06:40.744681   21003 cache_images.go:84] Images are preloaded, skipping loading
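The preload check above succeeds the second time because the extracted tarball populated the image store; the earlier run at 18:06:37 found nothing and triggered the tarball copy. A rough Go sketch of that check follows: decode `sudo crictl images --output json` and look for an expected tag such as registry.k8s.io/kube-apiserver:v1.31.0. The JSON shape assumed here ({"images":[{"repoTags":[...]}]}) follows the CRI image listing, but treat both the shape and the lookup as assumptions rather than minikube's exact parsing.

// Illustrative sketch: check whether a required image tag is present in crictl's image list.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	want := "registry.k8s.io/kube-apiserver:v1.31.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image found:", want)
				return
			}
		}
	}
	fmt.Println("couldn't find preloaded image; assuming images are not preloaded")
}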
	I0829 18:06:40.744691   21003 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.31.0 crio true true} ...
	I0829 18:06:40.744815   21003 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-647117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:06:40.744879   21003 ssh_runner.go:195] Run: crio config
	I0829 18:06:40.799521   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:40.799538   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:40.799554   21003 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:40.799578   21003 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-647117 NodeName:addons-647117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:40.799725   21003 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-647117"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:06:40.799784   21003 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:40.809042   21003 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:06:40.809100   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:40.817470   21003 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 18:06:40.832347   21003 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:40.846895   21003 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0829 18:06:40.861793   21003 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:40.865178   21003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:40.875661   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:40.982884   21003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:40.997705   21003 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117 for IP: 192.168.39.43
	I0829 18:06:40.997731   21003 certs.go:194] generating shared ca certs ...
	I0829 18:06:40.997746   21003 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:40.997866   21003 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:06:41.043528   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt ...
	I0829 18:06:41.043558   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt: {Name:mkea6106ba4ad65ce6f8bed60295c8f24482327b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.043722   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key ...
	I0829 18:06:41.043735   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key: {Name:mke9ce6afa81d222f2c50749e4037b87a5d38dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.043805   21003 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:06:41.128075   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt ...
	I0829 18:06:41.128106   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt: {Name:mkdbc53401c430ff97fec9666f2d5f302313570c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.128259   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key ...
	I0829 18:06:41.128270   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key: {Name:mk367415a361fb5a9c7503ec33cd8caa1e52aa57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.128329   21003 certs.go:256] generating profile certs ...
	I0829 18:06:41.128382   21003 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key
	I0829 18:06:41.128395   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt with IP's: []
	I0829 18:06:41.221652   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt ...
	I0829 18:06:41.221679   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: {Name:mk7255e28303157d05d1b68e28117d8e36fbd22c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.221828   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key ...
	I0829 18:06:41.221838   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key: {Name:mkbf2b01f6f057886492f2c68b0e29df0e06c856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.222390   21003 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9
	I0829 18:06:41.222413   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43]
	I0829 18:06:41.392081   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 ...
	I0829 18:06:41.392114   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9: {Name:mkd530b794cbdec523005231e4a057aefd476fa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.392297   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9 ...
	I0829 18:06:41.392313   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9: {Name:mk3e2c877bb82fbb95364dcb98f1881ca9941820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.392417   21003 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt
	I0829 18:06:41.392493   21003 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key
	I0829 18:06:41.392538   21003 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key
	I0829 18:06:41.392555   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt with IP's: []
	I0829 18:06:41.549956   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt ...
	I0829 18:06:41.549986   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt: {Name:mke718e76c91b48339bb92cf2bf888e30bb5dc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.550174   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key ...
	I0829 18:06:41.550190   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key: {Name:mkd9cbaa4b6e0247b270644d1a1f676717828d7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.550382   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:06:41.550419   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:06:41.550440   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:41.550461   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:06:41.551061   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:41.574578   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:06:41.596186   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:41.617109   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:06:41.638159   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:06:41.661044   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:06:41.698709   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:41.722591   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:06:41.743216   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:41.763431   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:41.777864   21003 ssh_runner.go:195] Run: openssl version
	I0829 18:06:41.783206   21003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:41.793369   21003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.797576   21003 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.797635   21003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.803014   21003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:06:41.812720   21003 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:41.816257   21003 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:41.816304   21003 kubeadm.go:392] StartCluster: {Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:41.816395   21003 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:06:41.816453   21003 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:06:41.849244   21003 cri.go:89] found id: ""
	I0829 18:06:41.849319   21003 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:41.858563   21003 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:41.867292   21003 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:41.876016   21003 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:41.876037   21003 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:41.876080   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:41.884227   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:41.884280   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:41.892834   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:41.900929   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:41.900979   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:41.909576   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:41.917827   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:41.917879   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:41.926476   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:41.934804   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:41.934856   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:06:41.943606   21003 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:06:41.992646   21003 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:41.992776   21003 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:42.092351   21003 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:42.092518   21003 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:42.092669   21003 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:42.101559   21003 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:42.104509   21003 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:42.104621   21003 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:42.104687   21003 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:42.537741   21003 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:42.671932   21003 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:42.772862   21003 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:42.890551   21003 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:43.201812   21003 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:43.202000   21003 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-647117 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0829 18:06:43.375327   21003 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:43.375499   21003 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-647117 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0829 18:06:43.548880   21003 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:43.670158   21003 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:43.818859   21003 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:43.818919   21003 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:44.033791   21003 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:44.234114   21003 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:44.283551   21003 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:44.377485   21003 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:44.608153   21003 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:44.608910   21003 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:44.611448   21003 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:44.613436   21003 out.go:235]   - Booting up control plane ...
	I0829 18:06:44.613569   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:44.613680   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:44.613772   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:44.628134   21003 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:44.634006   21003 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:44.634068   21003 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:44.748283   21003 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:44.748472   21003 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:45.249786   21003 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995827ms
	I0829 18:06:45.249887   21003 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:50.747506   21003 kubeadm.go:310] [api-check] The API server is healthy after 5.501622111s
	I0829 18:06:50.761005   21003 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:50.778931   21003 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:50.804583   21003 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:50.804806   21003 kubeadm.go:310] [mark-control-plane] Marking the node addons-647117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:50.815965   21003 kubeadm.go:310] [bootstrap-token] Using token: wiq59h.4ta20vef60ifolag
	I0829 18:06:50.817393   21003 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:50.817515   21003 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:50.823008   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:50.829342   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:50.834828   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:50.837480   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:50.840740   21003 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:51.153540   21003 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:51.619414   21003 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:52.154068   21003 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:52.154113   21003 kubeadm.go:310] 
	I0829 18:06:52.154186   21003 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:52.154195   21003 kubeadm.go:310] 
	I0829 18:06:52.154271   21003 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:52.154279   21003 kubeadm.go:310] 
	I0829 18:06:52.154298   21003 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:52.154372   21003 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:52.154426   21003 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:52.154436   21003 kubeadm.go:310] 
	I0829 18:06:52.154498   21003 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:52.154509   21003 kubeadm.go:310] 
	I0829 18:06:52.154564   21003 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:52.154571   21003 kubeadm.go:310] 
	I0829 18:06:52.154643   21003 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:52.154739   21003 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:52.154828   21003 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:52.154837   21003 kubeadm.go:310] 
	I0829 18:06:52.154960   21003 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:52.155076   21003 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:52.155085   21003 kubeadm.go:310] 
	I0829 18:06:52.155192   21003 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wiq59h.4ta20vef60ifolag \
	I0829 18:06:52.155350   21003 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 18:06:52.155395   21003 kubeadm.go:310] 	--control-plane 
	I0829 18:06:52.155404   21003 kubeadm.go:310] 
	I0829 18:06:52.155507   21003 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:52.155517   21003 kubeadm.go:310] 
	I0829 18:06:52.155624   21003 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wiq59h.4ta20vef60ifolag \
	I0829 18:06:52.155743   21003 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 18:06:52.156619   21003 kubeadm.go:310] W0829 18:06:41.972258     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:52.156965   21003 kubeadm.go:310] W0829 18:06:41.973234     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:52.157113   21003 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:52.157145   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:52.157162   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:52.158997   21003 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:52.160298   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:52.169724   21003 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:52.191549   21003 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:52.191676   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.191714   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-647117 minikube.k8s.io/updated_at=2024_08_29T18_06_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-647117 minikube.k8s.io/primary=true
	I0829 18:06:52.209914   21003 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:52.324976   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.825811   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.325292   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.825112   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.325820   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.825675   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.325178   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.825703   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.324989   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.414413   21003 kubeadm.go:1113] duration metric: took 4.222809669s to wait for elevateKubeSystemPrivileges
	I0829 18:06:56.414449   21003 kubeadm.go:394] duration metric: took 14.598146711s to StartCluster
	I0829 18:06:56.414471   21003 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.414595   21003 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:06:56.415169   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.415361   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:56.415396   21003 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:56.415462   21003 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:56.415582   21003 addons.go:69] Setting yakd=true in profile "addons-647117"
	I0829 18:06:56.415605   21003 addons.go:69] Setting registry=true in profile "addons-647117"
	I0829 18:06:56.415609   21003 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-647117"
	I0829 18:06:56.415625   21003 addons.go:69] Setting helm-tiller=true in profile "addons-647117"
	I0829 18:06:56.415629   21003 addons.go:69] Setting volcano=true in profile "addons-647117"
	I0829 18:06:56.415588   21003 addons.go:69] Setting ingress=true in profile "addons-647117"
	I0829 18:06:56.415645   21003 addons.go:234] Setting addon registry=true in "addons-647117"
	I0829 18:06:56.415651   21003 addons.go:234] Setting addon helm-tiller=true in "addons-647117"
	I0829 18:06:56.415663   21003 addons.go:234] Setting addon volcano=true in "addons-647117"
	I0829 18:06:56.415667   21003 addons.go:69] Setting volumesnapshots=true in profile "addons-647117"
	I0829 18:06:56.415668   21003 addons.go:69] Setting storage-provisioner=true in profile "addons-647117"
	I0829 18:06:56.415681   21003 addons.go:234] Setting addon volumesnapshots=true in "addons-647117"
	I0829 18:06:56.415685   21003 addons.go:234] Setting addon storage-provisioner=true in "addons-647117"
	I0829 18:06:56.415691   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415696   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415702   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415706   21003 addons.go:69] Setting inspektor-gadget=true in profile "addons-647117"
	I0829 18:06:56.415708   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415724   21003 addons.go:234] Setting addon inspektor-gadget=true in "addons-647117"
	I0829 18:06:56.415751   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415641   21003 addons.go:234] Setting addon yakd=true in "addons-647117"
	I0829 18:06:56.415802   21003 addons.go:69] Setting ingress-dns=true in profile "addons-647117"
	I0829 18:06:56.415696   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415835   21003 addons.go:234] Setting addon ingress-dns=true in "addons-647117"
	I0829 18:06:56.415836   21003 addons.go:69] Setting metrics-server=true in profile "addons-647117"
	I0829 18:06:56.415856   21003 addons.go:234] Setting addon metrics-server=true in "addons-647117"
	I0829 18:06:56.415872   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415889   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416119   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.415659   21003 addons.go:234] Setting addon ingress=true in "addons-647117"
	I0829 18:06:56.416144   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416143   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416147   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416156   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416160   21003 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-647117"
	I0829 18:06:56.416176   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416181   21003 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-647117"
	I0829 18:06:56.415611   21003 addons.go:69] Setting default-storageclass=true in profile "addons-647117"
	I0829 18:06:56.416203   21003 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-647117"
	I0829 18:06:56.416210   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416228   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416233   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416146   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416284   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415822   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416327   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416344   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416347   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416361   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416433   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.415659   21003 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-647117"
	I0829 18:06:56.415615   21003 addons.go:69] Setting gcp-auth=true in profile "addons-647117"
	I0829 18:06:56.416493   21003 mustload.go:65] Loading cluster: addons-647117
	I0829 18:06:56.416505   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416536   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416457   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415599   21003 addons.go:69] Setting cloud-spanner=true in profile "addons-647117"
	I0829 18:06:56.416608   21003 addons.go:234] Setting addon cloud-spanner=true in "addons-647117"
	I0829 18:06:56.416650   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416663   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:56.416670   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416730   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416786   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416818   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416884   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416926   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:56.416653   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416993   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415606   21003 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-647117"
	I0829 18:06:56.417062   21003 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-647117"
	I0829 18:06:56.417124   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.417157   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.417190   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.417211   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.417237   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.417759   21003 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:56.431414   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:56.436670   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0829 18:06:56.437146   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.437246   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0829 18:06:56.437394   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0829 18:06:56.437610   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.437628   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.437687   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0829 18:06:56.437809   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.437950   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.438197   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.438211   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.438343   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.438359   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.438942   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.438986   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.442810   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.442949   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0829 18:06:56.442939   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.443564   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.443717   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.443773   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.444026   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.444479   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.444515   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.446472   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.446513   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.446968   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.447446   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.447153   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.447525   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.447738   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.447816   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.448300   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.448328   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.451235   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.451255   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.451627   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.452195   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.452230   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.452570   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0829 18:06:56.453048   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.453560   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.453579   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.453925   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.454471   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.454511   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.472672   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0829 18:06:56.473419   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478181   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0829 18:06:56.478196   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0829 18:06:56.478338   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0829 18:06:56.478756   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478771   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478855   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.479244   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.479270   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.479636   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.479717   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0829 18:06:56.479939   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.479951   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.480164   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.480179   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.480246   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.480250   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.480279   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.480366   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.480555   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.480617   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0829 18:06:56.480802   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.480928   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.480946   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481087   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.481111   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481293   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.481700   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.481719   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481740   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.481751   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.482059   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0829 18:06:56.482184   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.482473   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.482798   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.482822   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.482948   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.482978   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.483112   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.483588   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.483605   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.485285   21003 addons.go:234] Setting addon default-storageclass=true in "addons-647117"
	I0829 18:06:56.485323   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.485708   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.485742   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.485941   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0829 18:06:56.485968   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.486037   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0829 18:06:56.486453   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.486581   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.486798   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.486833   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.487055   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.487069   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.487187   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.487201   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.487491   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.487517   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.487987   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.488025   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.488059   21003 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:56.488507   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.488534   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.488746   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.489095   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.489117   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.490168   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.490301   21003 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:56.491450   21003 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:56.491467   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:56.491485   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.492948   21003 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-647117"
	I0829 18:06:56.492988   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.493330   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.493369   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.496719   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.497204   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.497226   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.498188   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0829 18:06:56.498268   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.498509   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.498603   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.498650   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.498793   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.499537   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.499570   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.499902   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.500440   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.500481   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.501294   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0829 18:06:56.502049   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.502504   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.502535   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.503107   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.503657   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.503701   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.507276   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0829 18:06:56.507768   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.508382   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.508406   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.508722   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.508861   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.510677   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.512639   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:56.513776   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:56.513797   21003 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:56.513817   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.515319   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0829 18:06:56.515800   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.516786   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.516805   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.516856   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.517214   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.517235   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.517370   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.517505   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.517553   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.517600   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.517708   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.518168   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.518208   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.532347   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0829 18:06:56.532894   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0829 18:06:56.533030   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.533414   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.533591   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.533603   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.534067   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.534409   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.534422   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.534514   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.534861   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.535226   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.535924   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0829 18:06:56.536353   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.536420   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0829 18:06:56.536755   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.536837   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0829 18:06:56.537295   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.537312   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.537384   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.537694   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.537869   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.538075   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.538716   21003 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:56.538773   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.538789   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.538859   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0829 18:06:56.539014   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.539114   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0829 18:06:56.539308   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.539327   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.539346   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.539533   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.539598   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.539646   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.540006   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.540014   21003 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:56.540022   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.540028   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:56.540045   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.540163   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.540232   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.540650   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.541057   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.541096   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.541262   21003 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:56.541638   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541311   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.540506   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.541936   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541939   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541995   21003 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:56.543193   21003 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.543211   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:56.543229   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.544013   21003 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:56.544028   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:56.544045   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.545403   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.545625   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.545907   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.546106   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.546226   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.546589   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.546667   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.546715   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.547188   21003 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:56.547565   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.548163   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.547666   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:56.548188   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:06:56.547970   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.548506   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:56.548516   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:06:56.548518   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:56.548537   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.548541   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:56.548548   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:56.548556   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:56.548563   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:06:56.548753   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.548823   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.548937   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.549134   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.549334   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:56.549403   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.549468   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0829 18:06:56.549564   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.549609   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.549623   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.549772   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.549834   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.549914   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.549974   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.550110   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.550260   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.550571   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:06:56.550571   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:56.550591   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	W0829 18:06:56.550660   21003 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:06:56.550690   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.550703   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.551269   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.551508   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.552601   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:56.552711   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.552948   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.553349   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.553376   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.553418   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.553567   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.553722   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.553833   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.554958   21003 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:56.554967   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:56.556064   21003 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:56.556082   21003 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:56.556101   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.556540   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0829 18:06:56.557101   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.557246   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:56.557716   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.557731   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.558069   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.558265   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.559622   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.559739   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:56.560081   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.560099   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.560311   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.560461   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.560522   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0829 18:06:56.560720   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I0829 18:06:56.560690   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.560989   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.561397   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0829 18:06:56.561537   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.561727   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:56.561802   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.561893   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.562018   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.562038   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.562455   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0829 18:06:56.562581   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.562586   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.562691   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.562761   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.563130   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.563148   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.563265   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.563283   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.563450   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.563577   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:56.563731   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.563743   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.563805   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.564012   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.564052   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42963
	I0829 18:06:56.564704   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.564786   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.564795   21003 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:56.565163   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.565201   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.565775   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.565872   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:56.565953   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:56.565966   21003 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:56.565982   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.565984   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.566000   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.566529   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.566553   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566600   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566876   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:56.566891   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:56.566913   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566921   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.567522   21003 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:56.568498   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.568666   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:56.568680   21003 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:56.568693   21003 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:56.568712   21003 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:56.568697   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.569831   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:56.569913   21003 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:56.569926   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:56.569945   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.570902   21003 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:56.571368   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571392   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571846   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.571869   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571947   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.571967   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.572003   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.572159   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.572233   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.572258   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.572364   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.572388   21003 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:56.572399   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:56.572413   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.572417   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.572536   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.572741   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.572872   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.573786   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.573963   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574278   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.574356   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574444   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.574528   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.574569   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574785   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.574857   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.575066   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.575072   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.575270   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.575284   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.575483   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.575644   21003 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:56.575656   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:56.575670   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.575415   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.577142   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0829 18:06:56.577490   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.577544   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.577856   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.577875   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.578165   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.578188   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.578358   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.578394   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.578517   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.578591   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.578730   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.578852   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.582225   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.582235   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.582242   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.582251   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.582262   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.582402   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.582415   21003 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:56.582424   21003 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:56.582439   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.582563   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.582717   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	W0829 18:06:56.583947   21003 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35106->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.583981   21003 retry.go:31] will retry after 265.336769ms: ssh: handshake failed: read tcp 192.168.39.1:35106->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.585697   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.586161   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.586192   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.586351   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.586491   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.586629   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.586736   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	W0829 18:06:56.607131   21003 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35120->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.607153   21003 retry.go:31] will retry after 305.774806ms: ssh: handshake failed: read tcp 192.168.39.1:35120->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.875799   21003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:56.875873   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:56.927872   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.928816   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:57.008376   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:57.008396   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:57.014179   21003 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:57.014203   21003 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:57.027140   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:57.027167   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:57.043157   21003 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:57.043177   21003 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:57.070356   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:57.099182   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:57.099201   21003 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:57.138825   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:57.138848   21003 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:57.151051   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:57.190016   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:57.190037   21003 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:57.210335   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:57.210355   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:57.221961   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:57.270521   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:57.270543   21003 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:57.315049   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:57.332317   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:57.332343   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:57.365240   21003 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:57.365263   21003 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:57.370347   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:57.370362   21003 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:57.413086   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:57.413118   21003 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:57.414407   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:57.414426   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:57.436369   21003 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.436388   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:57.485961   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:57.524473   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:57.562208   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.563959   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:57.571757   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:57.571776   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:57.587934   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:57.587954   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:57.667126   21003 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:57.667154   21003 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:57.696933   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:57.696960   21003 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:57.697118   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:57.697134   21003 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:57.826566   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:57.826587   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:57.883248   21003 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:57.883276   21003 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:57.928373   21003 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:57.928400   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:57.998581   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:57.998607   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:58.183428   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:58.183455   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:58.241042   21003 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:58.241068   21003 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:58.256257   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:58.316439   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:58.443343   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:58.443364   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:58.445449   21003 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:58.445468   21003 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:58.660398   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:58.660424   21003 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:58.662312   21003 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786403949s)
	I0829 18:06:58.662328   21003 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.786494537s)
	I0829 18:06:58.662342   21003 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:58.663018   21003 node_ready.go:35] waiting up to 6m0s for node "addons-647117" to be "Ready" ...
	I0829 18:06:58.666067   21003 node_ready.go:49] node "addons-647117" has status "Ready":"True"
	I0829 18:06:58.666084   21003 node_ready.go:38] duration metric: took 3.048985ms for node "addons-647117" to be "Ready" ...
	I0829 18:06:58.666106   21003 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:58.676217   21003 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:58.801455   21003 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:58.801477   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:58.995484   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:59.015898   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:59.015928   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:59.185715   21003 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-647117" context rescaled to 1 replicas
	I0829 18:06:59.282748   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:59.282771   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:59.559451   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:59.559475   21003 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:59.736185   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:00.724928   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:01.060208   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.13229736s)
	I0829 18:07:01.060262   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.131426124s)
	I0829 18:07:01.060266   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060279   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060285   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060293   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060306   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.989913885s)
	I0829 18:07:01.060348   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060367   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060369   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.838385594s)
	I0829 18:07:01.060384   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060397   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060352   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.909277018s)
	I0829 18:07:01.060452   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060461   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060780   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060786   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.060796   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.060800   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060805   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060813   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060816   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060836   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.060843   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.060850   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060857   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060978   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061004   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061014   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061023   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061246   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061254   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061263   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061270   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061525   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.061547   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061554   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061561   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061577   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061791   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.061812   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061818   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.062559   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062587   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062611   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.062618   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.062830   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062864   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.062872   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.063136   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.063173   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.063180   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.063261   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.063273   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.238880   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.238905   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.239324   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.239339   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.239337   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.571208   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.256119707s)
	I0829 18:07:01.571266   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.571285   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.571510   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.571527   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.571536   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.571543   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.571811   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.571832   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.571841   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.681468   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.681491   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.681800   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.681893   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.681905   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979228   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.49321647s)
	I0829 18:07:01.979257   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.454750161s)
	I0829 18:07:01.979274   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979291   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979292   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979305   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979329   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.417089396s)
	I0829 18:07:01.979375   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979389   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979660   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.979674   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979683   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979691   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979700   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.979728   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.979734   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.979747   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979761   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979769   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.980006   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.980037   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980048   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980050   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.980086   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980094   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980103   21003 addons.go:475] Verifying addon registry=true in "addons-647117"
	I0829 18:07:01.980373   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980385   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980394   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.980402   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.981457   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.981470   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.981480   21003 addons.go:475] Verifying addon metrics-server=true in "addons-647117"
	I0829 18:07:01.982538   21003 out.go:177] * Verifying registry addon...
	I0829 18:07:01.984946   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:07:02.031640   21003 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:07:02.031663   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.525184   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.000875   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.183701   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:03.491799   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.593792   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:07:03.593832   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:07:03.597360   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.597814   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:07:03.597845   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.598025   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:07:03.598268   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:07:03.598470   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:07:03.598664   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:07:03.833461   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:07:03.876546   21003 addons.go:234] Setting addon gcp-auth=true in "addons-647117"
	I0829 18:07:03.876598   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:07:03.876890   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:07:03.876915   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:07:03.892569   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0829 18:07:03.893039   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:07:03.893483   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:07:03.893502   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:07:03.893860   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:07:03.894349   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:07:03.894372   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:07:03.908630   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0829 18:07:03.909028   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:07:03.909510   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:07:03.909530   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:07:03.909878   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:07:03.910100   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:07:03.911780   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:07:03.912019   21003 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:07:03.912041   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:07:03.914511   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.914935   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:07:03.914960   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.915116   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:07:03.915301   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:07:03.915464   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:07:03.915620   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:07:04.022481   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.501297   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.735718   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.17172825s)
	I0829 18:07:04.735757   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.735766   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.735865   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.479566427s)
	W0829 18:07:04.735914   21003 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:04.735926   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.419451964s)
	I0829 18:07:04.735958   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.735981   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.735976   21003 retry.go:31] will retry after 229.112003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:04.736053   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736066   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736077   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736085   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736150   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.740634409s)
	I0829 18:07:04.736182   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736194   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736197   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736211   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736215   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736300   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736221   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736347   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736362   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736373   21003 addons.go:475] Verifying addon ingress=true in "addons-647117"
	I0829 18:07:04.736675   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736697   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736704   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736712   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736800   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736819   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736832   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736840   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.737121   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.737148   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.737155   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.739047   21003 out.go:177] * Verifying ingress addon...
	I0829 18:07:04.739055   21003 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-647117 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:07:04.741307   21003 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:07:04.745091   21003 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:07:04.745106   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.965918   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:04.987862   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.250313   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.502670   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.726015   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:05.763615   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.799116   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.062879943s)
	I0829 18:07:05.799136   21003 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.88709264s)
	I0829 18:07:05.799162   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:05.799177   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:05.799451   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:05.799474   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:05.799484   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:05.799493   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:05.799497   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:05.799758   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:05.799780   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:05.799790   21003 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-647117"
	I0829 18:07:05.799799   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:05.800504   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:07:05.801286   21003 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:07:05.802603   21003 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:07:05.803538   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:07:05.803551   21003 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:07:05.803578   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:07:05.837611   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:07:05.837635   21003 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:07:05.856926   21003 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:07:05.856951   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.886792   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:05.886814   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:07:05.934598   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:06.250813   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.251110   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.348403   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.488440   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.745795   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.807735   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.996848   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.105783   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.139806103s)
	I0829 18:07:07.105829   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.105845   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.106137   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:07.107594   21003 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0829 18:07:07.107610   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.107623   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.107632   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.107958   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.107976   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.212977   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.278337274s)
	I0829 18:07:07.213038   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.213058   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.213352   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.213372   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.213383   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.213390   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.213624   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:07.213654   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.213671   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.215310   21003 addons.go:475] Verifying addon gcp-auth=true in "addons-647117"
	I0829 18:07:07.217287   21003 out.go:177] * Verifying gcp-auth addon...
	I0829 18:07:07.219398   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:07:07.246816   21003 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:07:07.246836   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.309709   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.311474   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.490556   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.723447   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.746060   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.808691   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.989564   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.182573   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:08.222445   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.245717   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.308826   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.489048   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.723297   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.745592   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.808123   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.989930   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.185160   21003 pod_ready.go:98] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.43 HostIPs:[{IP:192.168.39.43}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:07:01 +0000 UTC,FinishedAt:2024-08-29 18:07:06 +0000 UTC,ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485 Started:0xc0027c21b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002646e00} {Name:kube-api-access-fc2r9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002646e10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:09.185196   21003 pod_ready.go:82] duration metric: took 10.508944074s for pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace to be "Ready" ...
	E0829 18:07:09.185208   21003 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.43 HostIPs:[{IP:192.168.39.43}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:07:01 +0000 UTC,FinishedAt:2024-08-29 18:07:06 +0000 UTC,ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485 Started:0xc0027c21b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002646e00} {Name:kube-api-access-fc2r9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002646e10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:09.185217   21003 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.192464   21003 pod_ready.go:93] pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.192485   21003 pod_ready.go:82] duration metric: took 7.259302ms for pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.192494   21003 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.198684   21003 pod_ready.go:93] pod "etcd-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.198704   21003 pod_ready.go:82] duration metric: took 6.204777ms for pod "etcd-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.198713   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.203256   21003 pod_ready.go:93] pod "kube-apiserver-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.203273   21003 pod_ready.go:82] duration metric: took 4.55494ms for pod "kube-apiserver-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.203282   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.207437   21003 pod_ready.go:93] pod "kube-controller-manager-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.207455   21003 pod_ready.go:82] duration metric: took 4.167044ms for pod "kube-controller-manager-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.207464   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dptz4" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.223722   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.326499   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.326509   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.489972   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.580220   21003 pod_ready.go:93] pod "kube-proxy-dptz4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.580245   21003 pod_ready.go:82] duration metric: took 372.774467ms for pod "kube-proxy-dptz4" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.580257   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.726036   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.745103   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.808109   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.980305   21003 pod_ready.go:93] pod "kube-scheduler-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.980340   21003 pod_ready.go:82] duration metric: took 400.073461ms for pod "kube-scheduler-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.980352   21003 pod_ready.go:39] duration metric: took 11.314232535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:09.980374   21003 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:09.980445   21003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:09.988253   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.029423   21003 api_server.go:72] duration metric: took 13.613993413s to wait for apiserver process to appear ...
	I0829 18:07:10.029447   21003 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:10.029482   21003 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0829 18:07:10.033725   21003 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0829 18:07:10.034999   21003 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:10.035018   21003 api_server.go:131] duration metric: took 5.56499ms to wait for apiserver health ...
	I0829 18:07:10.035026   21003 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:10.188946   21003 system_pods.go:59] 18 kube-system pods found
	I0829 18:07:10.188982   21003 system_pods.go:61] "coredns-6f6b679f8f-nhhtz" [bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2] Running
	I0829 18:07:10.188990   21003 system_pods.go:61] "csi-hostpath-attacher-0" [442c8a1e-b851-4b2f-a39a-da8738074897] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.188996   21003 system_pods.go:61] "csi-hostpath-resizer-0" [fb7dfca7-b2eb-492b-934b-81a33c34709a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.189004   21003 system_pods.go:61] "csi-hostpathplugin-b2xkq" [e62b7174-47eb-4ff8-a1db-76f9936a924d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.189009   21003 system_pods.go:61] "etcd-addons-647117" [9f96c1c2-351b-4af4-9c9c-89ed5623670f] Running
	I0829 18:07:10.189013   21003 system_pods.go:61] "kube-apiserver-addons-647117" [035080d0-8ea6-4d22-9861-28b1129fdabb] Running
	I0829 18:07:10.189017   21003 system_pods.go:61] "kube-controller-manager-addons-647117" [937119a3-ad43-498c-8a11-10919cd3cf8c] Running
	I0829 18:07:10.189024   21003 system_pods.go:61] "kube-ingress-dns-minikube" [a9a425c2-2fd3-4e62-be25-f26a8f87ddd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.189030   21003 system_pods.go:61] "kube-proxy-dptz4" [9a386c43-bd19-4ba5-a2be-6c0019adeedd] Running
	I0829 18:07:10.189035   21003 system_pods.go:61] "kube-scheduler-addons-647117" [159e6309-ac85-43f4-9c40-f6bf4ccb7035] Running
	I0829 18:07:10.189042   21003 system_pods.go:61] "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.189050   21003 system_pods.go:61] "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.189060   21003 system_pods.go:61] "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.189068   21003 system_pods.go:61] "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.189079   21003 system_pods.go:61] "snapshot-controller-56fcc65765-kgrh6" [1f305fc4-1a8a-47d0-bb41-7c8f77b1459c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.189085   21003 system_pods.go:61] "snapshot-controller-56fcc65765-kpgzh" [62b317a2-39aa-4da5-a04b-a97a0c67f06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.189090   21003 system_pods.go:61] "storage-provisioner" [abb10014-4a67-4ddf-ba6b-89598283be68] Running
	I0829 18:07:10.189099   21003 system_pods.go:61] "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:07:10.189105   21003 system_pods.go:74] duration metric: took 154.074157ms to wait for pod list to return data ...
	I0829 18:07:10.189116   21003 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:07:10.222838   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.247273   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.309243   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.380898   21003 default_sa.go:45] found service account: "default"
	I0829 18:07:10.380924   21003 default_sa.go:55] duration metric: took 191.802984ms for default service account to be created ...
	I0829 18:07:10.380932   21003 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:07:10.488590   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.584828   21003 system_pods.go:86] 18 kube-system pods found
	I0829 18:07:10.584854   21003 system_pods.go:89] "coredns-6f6b679f8f-nhhtz" [bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2] Running
	I0829 18:07:10.584864   21003 system_pods.go:89] "csi-hostpath-attacher-0" [442c8a1e-b851-4b2f-a39a-da8738074897] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.584871   21003 system_pods.go:89] "csi-hostpath-resizer-0" [fb7dfca7-b2eb-492b-934b-81a33c34709a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.584878   21003 system_pods.go:89] "csi-hostpathplugin-b2xkq" [e62b7174-47eb-4ff8-a1db-76f9936a924d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.584883   21003 system_pods.go:89] "etcd-addons-647117" [9f96c1c2-351b-4af4-9c9c-89ed5623670f] Running
	I0829 18:07:10.584888   21003 system_pods.go:89] "kube-apiserver-addons-647117" [035080d0-8ea6-4d22-9861-28b1129fdabb] Running
	I0829 18:07:10.584893   21003 system_pods.go:89] "kube-controller-manager-addons-647117" [937119a3-ad43-498c-8a11-10919cd3cf8c] Running
	I0829 18:07:10.584902   21003 system_pods.go:89] "kube-ingress-dns-minikube" [a9a425c2-2fd3-4e62-be25-f26a8f87ddd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.584907   21003 system_pods.go:89] "kube-proxy-dptz4" [9a386c43-bd19-4ba5-a2be-6c0019adeedd] Running
	I0829 18:07:10.584913   21003 system_pods.go:89] "kube-scheduler-addons-647117" [159e6309-ac85-43f4-9c40-f6bf4ccb7035] Running
	I0829 18:07:10.584924   21003 system_pods.go:89] "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.584935   21003 system_pods.go:89] "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.584945   21003 system_pods.go:89] "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.584950   21003 system_pods.go:89] "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.584955   21003 system_pods.go:89] "snapshot-controller-56fcc65765-kgrh6" [1f305fc4-1a8a-47d0-bb41-7c8f77b1459c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.584965   21003 system_pods.go:89] "snapshot-controller-56fcc65765-kpgzh" [62b317a2-39aa-4da5-a04b-a97a0c67f06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.584969   21003 system_pods.go:89] "storage-provisioner" [abb10014-4a67-4ddf-ba6b-89598283be68] Running
	I0829 18:07:10.584975   21003 system_pods.go:89] "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:07:10.584984   21003 system_pods.go:126] duration metric: took 204.046778ms to wait for k8s-apps to be running ...
	I0829 18:07:10.584994   21003 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:07:10.585045   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:07:10.626258   21003 system_svc.go:56] duration metric: took 41.254313ms WaitForService to wait for kubelet
	I0829 18:07:10.626292   21003 kubeadm.go:582] duration metric: took 14.210866708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:07:10.626318   21003 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:07:10.723351   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.745625   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.780607   21003 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:07:10.780633   21003 node_conditions.go:123] node cpu capacity is 2
	I0829 18:07:10.780645   21003 node_conditions.go:105] duration metric: took 154.321354ms to run NodePressure ...
	I0829 18:07:10.780656   21003 start.go:241] waiting for startup goroutines ...
	I0829 18:07:10.808661   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.432004   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.432056   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.432507   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.432753   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.531343   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.722334   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.746103   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.808992   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.988778   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.224840   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:12.245531   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.307880   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.488647   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.723996   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:12.745184   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.808714   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.988428   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.223147   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:13.245839   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.308973   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.875413   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.875496   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:13.875555   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.875916   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.988310   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.223406   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:14.246021   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.308758   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.489231   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.723115   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:14.750809   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.848451   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.989629   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.223214   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:15.245568   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.307971   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.488573   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.724020   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:15.747296   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.808899   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.989134   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.223214   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:16.245841   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.308609   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.489231   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.722831   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:16.745495   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.807750   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.988112   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.223152   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:17.245700   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.308534   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.490053   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.722271   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:17.745672   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.808093   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.989536   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.223076   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:18.245676   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.308003   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.488710   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.724041   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:18.745187   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.808284   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.988906   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.222566   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:19.246507   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.307703   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.488524   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.723848   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:19.744936   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.807986   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.989362   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.223136   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:20.245701   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.308166   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.488793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.722701   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:20.744935   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.807920   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.989378   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.223255   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:21.245626   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.307716   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.488497   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.722746   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:21.744978   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.808369   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.989361   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.223301   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:22.245645   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.307754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.488146   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.724753   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:22.745129   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.817804   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.989553   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.223526   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:23.245605   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.308356   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.488772   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.723300   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:23.745589   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.807597   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.988552   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.223387   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:24.245787   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.308121   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.489472   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.723639   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:24.744866   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.814322   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.989050   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.223626   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:25.244872   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.308113   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.489018   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.723187   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:25.745594   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.808380   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.990284   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.223467   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:26.246478   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.311430   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.489100   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.723298   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:26.745982   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.808347   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.989395   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.223619   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:27.244802   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.308288   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.488267   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.723514   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:27.745730   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.807863   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.989687   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.223318   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:28.245983   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.308333   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.488782   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.722485   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:28.745638   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.808513   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.991921   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.222789   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:29.245435   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.308533   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.488400   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.723378   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:29.745288   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.807764   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.989287   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.223850   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:30.245679   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.307898   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.488344   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.723583   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:30.745909   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.808358   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.989347   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.223420   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:31.245676   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.308106   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.489548   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.723984   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:31.752426   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.808206   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.988904   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.222648   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:32.245333   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.307744   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.488573   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.724105   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:32.825629   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.825917   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.989527   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.223029   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:33.245355   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.308032   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.490376   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.722861   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:33.745432   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.808944   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.992715   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.223303   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:34.245804   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.308469   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.489113   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.722859   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:34.745014   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.809535   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.990897   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.223016   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:35.245393   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.307861   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.489500   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.724153   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:35.745295   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.808675   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.992470   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.224494   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:36.245850   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.308073   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.488905   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.723280   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:36.745428   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.807550   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.989313   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.223233   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:37.246873   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.309007   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.489533   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.723538   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:37.745569   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.809432   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.989055   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.223047   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:38.245660   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.308142   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.488344   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.723366   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:38.745351   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.808393   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.988503   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.223854   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:39.245533   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.307984   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.488928   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.722252   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:39.746300   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.808576   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.989080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.223015   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:40.245885   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.324651   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.489080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.722990   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:40.745516   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.808575   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.988689   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.223013   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:41.245430   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.308188   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.489125   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.723598   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:41.744926   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.808306   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.989614   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:42.224132   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:42.245427   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.307702   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.489328   21003 kapi.go:107] duration metric: took 40.504379034s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:42.723558   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:42.745851   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.808681   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.497177   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:43.497724   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.497761   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.722981   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:43.745692   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.807475   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.222828   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:44.245874   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.325234   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.723309   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:44.745739   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.807721   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.223946   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:45.245318   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.309088   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.723267   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:45.745838   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.808262   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.223279   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:46.245972   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.308455   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.722988   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:46.745976   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.808159   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.223759   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:47.245074   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.308591   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.723579   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:47.746171   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.808847   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.223841   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:48.245152   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.309348   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.722985   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:48.745588   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.808431   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.223107   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:49.245680   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.308240   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.723337   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:49.745413   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.807755   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.223677   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:50.245190   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.308677   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.723917   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:50.745139   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.808544   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.223080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:51.245425   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.308106   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.723688   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:51.746081   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.808225   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.223806   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:52.326377   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.327351   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.725059   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:52.826530   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.826759   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.228476   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:53.245760   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.309747   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.722617   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:53.746004   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.808430   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.517283   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:54.517839   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.518018   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.723061   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:54.746186   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.811981   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.222608   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:55.246316   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.308886   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.722235   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:55.745334   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.019434   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.223858   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:56.245409   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.307995   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.722626   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:56.745974   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.808140   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.223268   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:57.256102   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.308364   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.726325   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:57.745974   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.808877   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.223559   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:58.246847   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.312157   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.727333   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:58.746318   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.808148   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.222345   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:59.245913   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.307531   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.722489   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:59.745604   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.807676   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.271245   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:00.272539   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.308316   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.723754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:00.745187   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.807594   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.223141   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:01.245994   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.308389   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.723190   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:01.745545   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.807926   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.570569   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:02.571356   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.571633   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.724397   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:02.747272   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.826148   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.223815   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:03.246608   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.307864   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.726393   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:03.828835   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.828904   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.223011   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.245511   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.308195   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.723188   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.745550   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.807502   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.223443   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.246051   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.308712   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.723117   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.745574   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.808834   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.226761   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.245664   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.307618   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.725180   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.748981   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.808801   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.226928   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.245835   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.308980   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.722723   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.745324   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.807345   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.223879   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.325379   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.325434   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.725790   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.744949   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.826386   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.223279   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:09.246040   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.308012   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.723363   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:09.746259   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.809000   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.222946   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.252397   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.326511   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.726046   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.746259   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.809839   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.223348   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.246062   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.309338   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.728846   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.749115   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.809623   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.225216   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.246889   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.308657   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.724225   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.746449   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.809246   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.224804   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.247079   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.325658   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.723793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.745266   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.807779   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.222598   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.244733   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.308124   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.728165   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.746139   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.808642   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:15.223457   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.246721   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.308556   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:15.933232   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.936608   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.936821   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:16.223056   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.245394   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.307894   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:16.722613   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.745393   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.808036   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:17.224002   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.245283   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.327819   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:17.725793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.744806   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.808170   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:18.227738   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.245282   21003 kapi.go:107] duration metric: took 1m13.503976561s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:08:18.329111   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:18.787939   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.807754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:19.222198   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.308444   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:19.723855   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.808045   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:20.222926   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.307854   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:20.723764   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.826135   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:21.222994   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.307673   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:21.722977   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.807653   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:22.432663   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.432991   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:22.723932   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.825185   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:23.226536   21003 kapi.go:107] duration metric: took 1m16.007133625s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:08:23.228553   21003 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-647117 cluster.
	I0829 18:08:23.229841   21003 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:08:23.231235   21003 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
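	(Editorial aside, not part of the minikube output above: the gcp-auth webhook decides per pod based on a label, so opting a pod out of credential mounting means putting that label on the pod's metadata. The fragment below only illustrates where such a label lives on a Kubernetes pod object, written in Go with the corev1 types; the pod name, namespace, image, and the label value "true" are assumptions for the example, since the message above mentions only the key gcp-auth-skip-secret.)

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hypothetical pod that opts out of credential mounting; only the
		// "gcp-auth-skip-secret" label key is taken from the note above,
		// everything else is placeholder.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds",
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
		fmt.Println(pod.Labels) // map[gcp-auth-skip-secret:true]
	}

	(The same label could equally be set in a YAML manifest or with kubectl; the Go form is shown only to keep the illustration in one language.)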
	I0829 18:08:23.309308   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:23.809205   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:24.309098   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:24.808683   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:25.307456   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:25.810519   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:26.308581   21003 kapi.go:107] duration metric: took 1m20.505001944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:26.310411   21003 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0829 18:08:26.311643   21003 addons.go:510] duration metric: took 1m29.89618082s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns default-storageclass storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0829 18:08:26.311695   21003 start.go:246] waiting for cluster config update ...
	I0829 18:08:26.311717   21003 start.go:255] writing updated cluster config ...
	I0829 18:08:26.311981   21003 ssh_runner.go:195] Run: rm -f paused
	I0829 18:08:26.363273   21003 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:08:26.365265   21003 out.go:177] * Done! kubectl is now configured to use "addons-647117" cluster and "default" namespace by default
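	(Editorial note on the long run of "waiting for pod ... current state: Pending" lines above: they record minikube polling the API server, roughly every half second judging by the timestamps, for pods matching each addon's label selector until the pod leaves Pending, after which the "duration metric: took ..." line is emitted. The sketch below is an illustrative approximation of that pattern using client-go; it is not minikube's actual kapi.go helper, and the function name waitForPodByLabel, the 500ms interval, the 6-minute timeout, and the kubeconfig handling are assumptions made for the example.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodByLabel polls the namespace for pods matching selector until
	// one reports phase Running, or the timeout expires, logging each
	// intermediate phase much like the kapi.go lines above.
	func waitForPodByLabel(ctx context.Context, cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
				}
				return false, nil
			})
		if err != nil {
			return fmt.Errorf("pod %q not Running after %s: %w", selector, timeout, err)
		}
		log.Printf("took %s to wait for %s", time.Since(start), selector)
		return nil
	}

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Selector taken from the log above; namespace and timeout are assumptions.
		if err := waitForPodByLabel(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
	}

	(A Watch-based implementation would also work; simple polling is shown here because it maps one-to-one onto the repeated half-second log lines recorded above.)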
	
	
	==> CRI-O <==
	Aug 29 18:17:41 addons-647117 crio[663]: time="2024-08-29 18:17:41.946472420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955461946444281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526309,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=082d0240-780c-4d21-99a8-5b13a29b7073 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:17:41 addons-647117 crio[663]: time="2024-08-29 18:17:41.947259918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5588df0-f97b-4ecb-827b-9213e717bb69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:17:41 addons-647117 crio[663]: time="2024-08-29 18:17:41.947416523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5588df0-f97b-4ecb-827b-9213e717bb69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:17:41 addons-647117 crio[663]: time="2024-08-29 18:17:41.947902096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87527d9a78cbc624da6d5a5af51d81e391956bda82bb792f7879611a7ae71a8e,PodSandboxId:7ca69cb81d190def1c43cfaba88e1ec79bd7df7886d4cf0686394471d20a594f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724955416279381873,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: faa144ac-07b5-4015-a56f-848c5865ada9,},A
nnotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff3f05568585f3c0f9914c1b7ab42e408a731d22250d0e76b2eff572c7babec,PodSandboxId:2736ab6ab9133b1917e92581b91a5bb0990a074a1382a69100bec702185c6ead,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724955413213256391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9fa9d03-fb2e-451e-a9a6-0b22782a8629,},Annotations:m
ap[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa9dfa80e7691ccc3f18d6ff3a32987a3156afab1cf7b5d184d1e5bfcc6447a,PodSandboxId:e619fe499711ec9e0f41e6a55f947385cd0cc621a172cf121affdcd0e6283eb4,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3f7a6957d17a35365e60917cfcd237f8d2f3fe148524e452c70c09ea56306fa,State:CONTAINER_EXITED,CreatedAt:1724955229883122305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-n82kn,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40e746b1-473d-47aa-96bb-9c8d8bec2439,},Annotations:map
[string]string{io.kubernetes.container.hash: 9a47fed0,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kub
ernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d,PodSandboxId:1846f2ef9a5d9f2b5bcfb468955372e9a282a64acfb5d2627b9675ec734ef86d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724954897225056963,Labels:map[string]stri
ng{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-fxhk8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80bd8a11-05a0-44c4-8808-ee33a6be01ec,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e,PodSandboxId:73676cda05f2367b40f3a0c294fe814922e46484ee10d76931d7f9e16c3e8db0,Metadata:&ContainerMetadata{
Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880515432885,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tg7nb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52315a01-2d0e-4db7-9560-48dc7a163f0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0,PodSandboxId:6ed81ce9469c299ccbbeea991ac553c96eaff7aefcfaa0ab6f012f5bb2
f8a005,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880370356967,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qkkdh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af486128-f893-40e3-99de-17a3336cfaeb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b99c67dd9abcb2e7c5c7e4a6a1a5954ad5bacecc943524214ef0e81123462b2e,PodSandboxId:d2a72bf0d1935047aefe
50db74f70d27d1faac49e6098729193eb64461999702,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1724954864282203885,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-bz7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29de8757-9c38-4526-a266-586cd80d8d3b,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d,PodSandboxId:745776260a0afaca81afe6622c474e548ab0ebffae5400a51dbc41ce2231cc46,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724954842683970280,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9a425c2-2fd3-4e62-be25-f26a8f87ddd1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb1
0014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b
2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f14c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e95250c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubern
etes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee031186475db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5588df0-f97b-4ecb-827b-9213e717bb69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.126482232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6239eef9-9c2c-427d-85f5-190c71795897 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.126572807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6239eef9-9c2c-427d-85f5-190c71795897 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.127942645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6c7080a-fc9e-4a03-ba97-ab5cf952b10d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.129425545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955462129396787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526309,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6c7080a-fc9e-4a03-ba97-ab5cf952b10d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.130057051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf30f186-fd50-4c1b-bde8-cd0f672cdbc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.130128270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf30f186-fd50-4c1b-bde8-cd0f672cdbc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:17:42 addons-647117 crio[663]: time="2024-08-29 18:17:42.130619387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"
http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87527d9a78cbc624da6d5a5af51d81e391956bda82bb792f7879611a7ae71a8e,PodSandboxId:7ca69cb81d190def1c43cfaba88e1ec79bd7df7886d4cf0686394471d20a594f,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724955416279381873,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: faa144ac-07b5-4015-a56f-848c5865ada9,},A
nnotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff3f05568585f3c0f9914c1b7ab42e408a731d22250d0e76b2eff572c7babec,PodSandboxId:2736ab6ab9133b1917e92581b91a5bb0990a074a1382a69100bec702185c6ead,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724955413213256391,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9fa9d03-fb2e-451e-a9a6-0b22782a8629,},Annotations:m
ap[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fa9dfa80e7691ccc3f18d6ff3a32987a3156afab1cf7b5d184d1e5bfcc6447a,PodSandboxId:e619fe499711ec9e0f41e6a55f947385cd0cc621a172cf121affdcd0e6283eb4,Metadata:&ContainerMetadata{Name:gadget,Attempt:6,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3f7a6957d17a35365e60917cfcd237f8d2f3fe148524e452c70c09ea56306fa,State:CONTAINER_EXITED,CreatedAt:1724955229883122305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-n82kn,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40e746b1-473d-47aa-96bb-9c8d8bec2439,},Annotations:map
[string]string{io.kubernetes.container.hash: 9a47fed0,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kub
ernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d,PodSandboxId:1846f2ef9a5d9f2b5bcfb468955372e9a282a64acfb5d2627b9675ec734ef86d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724954897225056963,Labels:map[string]stri
ng{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-fxhk8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80bd8a11-05a0-44c4-8808-ee33a6be01ec,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e,PodSandboxId:73676cda05f2367b40f3a0c294fe814922e46484ee10d76931d7f9e16c3e8db0,Metadata:&ContainerMetadata{
Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880515432885,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tg7nb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52315a01-2d0e-4db7-9560-48dc7a163f0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0,PodSandboxId:6ed81ce9469c299ccbbeea991ac553c96eaff7aefcfaa0ab6f012f5bb2
f8a005,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880370356967,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qkkdh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af486128-f893-40e3-99de-17a3336cfaeb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b99c67dd9abcb2e7c5c7e4a6a1a5954ad5bacecc943524214ef0e81123462b2e,PodSandboxId:d2a72bf0d1935047aefe
50db74f70d27d1faac49e6098729193eb64461999702,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1724954864282203885,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-bz7cs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29de8757-9c38-4526-a266-586cd80d8d3b,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d,PodSandboxId:745776260a0afaca81afe6622c474e548ab0ebffae5400a51dbc41ce2231cc46,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724954842683970280,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9a425c2-2fd3-4e62-be25-f26a8f87ddd1,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb1
0014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b
2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f14c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e95250c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubern
etes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee031186475db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod
.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf30f186-fd50-4c1b-bde8-cd0f672cdbc0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4f5014c540fc       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        6 seconds ago       Running             headlamp                  0                   9876705b70ba7       headlamp-57fb76fcdb-jmjhc
	87527d9a78cbc       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             45 seconds ago      Exited              helper-pod                0                   7ca69cb81d190       helper-pod-delete-pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85
	3ff3f05568585       docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8                            49 seconds ago      Exited              busybox                   0                   2736ab6ab9133       test-local-path
	4fa9dfa80e769       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            3 minutes ago       Exited              gadget                    6                   e619fe499711e       gadget-n82kn
	a814d0a183682       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                  0                   56c18ca1bdb71       gcp-auth-89d5ffd79-j924p
	0d4cedb7f07b0       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                0                   1846f2ef9a5d9       ingress-nginx-controller-bc57996ff-fxhk8
	4f61716197768       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                     0                   73676cda05f23       ingress-nginx-admission-patch-tg7nb
	62f40717dc5b9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                    0                   6ed81ce9469c2       ingress-nginx-admission-create-qkkdh
	b99c67dd9abcb       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  9 minutes ago       Running             tiller                    0                   d2a72bf0d1935       tiller-deploy-b48cc5f79-bz7cs
	0b634523ff8d1       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        10 minutes ago      Running             metrics-server            0                   55d4a995519c0       metrics-server-8988944d9-9pvr6
	b3f71af1c5530       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   745776260a0af       kube-ingress-dns-minikube
	c7d6293cd5ae5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   d2641f267147c       storage-provisioner
	43c5285b49b2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             10 minutes ago      Running             coredns                   0                   29673979fe79f       coredns-6f6b679f8f-nhhtz
	20d8d4b2a5b99       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             10 minutes ago      Running             kube-proxy                0                   ca373cf48871d       kube-proxy-dptz4
	7109054cd9285       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             10 minutes ago      Running             kube-controller-manager   0                   f1139b5439166       kube-controller-manager-addons-647117
	3bbe72bf43966       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             10 minutes ago      Running             kube-scheduler            0                   63b0cbde37a9d       kube-scheduler-addons-647117
	e4037213915cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             10 minutes ago      Running             kube-apiserver            0                   2b4c41aeae940       kube-apiserver-addons-647117
	ad53629527269       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                      0                   905af1fd51ac9       etcd-addons-647117
	
	
	==> coredns [43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c] <==
	[INFO] 127.0.0.1:40023 - 21501 "HINFO IN 2107751163851146271.7937220302157701423. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011076414s
	[INFO] 10.244.0.7:57388 - 3898 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00041242s
	[INFO] 10.244.0.7:57388 - 35385 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160164s
	[INFO] 10.244.0.7:42181 - 16646 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102891s
	[INFO] 10.244.0.7:42181 - 61211 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000143215s
	[INFO] 10.244.0.7:40451 - 5822 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096496s
	[INFO] 10.244.0.7:40451 - 10428 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151048s
	[INFO] 10.244.0.7:50345 - 34777 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108547s
	[INFO] 10.244.0.7:50345 - 62175 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123168s
	[INFO] 10.244.0.7:43363 - 59112 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011089s
	[INFO] 10.244.0.7:43363 - 38637 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084266s
	[INFO] 10.244.0.7:43570 - 27914 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066159s
	[INFO] 10.244.0.7:43570 - 8968 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006745s
	[INFO] 10.244.0.7:51342 - 48058 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034576s
	[INFO] 10.244.0.7:51342 - 50108 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080216s
	[INFO] 10.244.0.7:55526 - 58103 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080655s
	[INFO] 10.244.0.7:55526 - 43765 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000491s
	[INFO] 10.244.0.22:59665 - 61483 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046118s
	[INFO] 10.244.0.22:56522 - 61414 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001110678s
	[INFO] 10.244.0.22:56188 - 1457 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155671s
	[INFO] 10.244.0.22:42917 - 2402 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000399062s
	[INFO] 10.244.0.22:48780 - 50292 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158469s
	[INFO] 10.244.0.22:43403 - 21131 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069692s
	[INFO] 10.244.0.22:59530 - 50990 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001145169s
	[INFO] 10.244.0.22:57789 - 7865 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001496446s
	
	
	==> describe nodes <==
	Name:               addons-647117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-647117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-647117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-647117
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-647117
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:17:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:17:24 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:17:24 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:17:24 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:17:24 +0000   Thu, 29 Aug 2024 18:06:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    addons-647117
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb2784d9f1e146b3adcb56f05f7d626c
	  System UUID:                eb2784d9-f1e1-46b3-adcb-56f05f7d626c
	  Boot ID:                    e13d5250-07a7-415d-bb34-b77c87eefe5b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  gadget                      gadget-n82kn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gcp-auth                    gcp-auth-89d5ffd79-j924p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-57fb76fcdb-jmjhc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-fxhk8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-nhhtz                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-647117                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-647117                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-647117       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-dptz4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-647117                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-8988944d9-9pvr6              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 tiller-deploy-b48cc5f79-bz7cs               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-647117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-647117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-647117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node addons-647117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node addons-647117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node addons-647117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                kubelet          Node addons-647117 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-647117 event: Registered Node addons-647117 in Controller
	
	
	==> dmesg <==
	[Aug29 18:07] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.033828] kauditd_printk_skb: 175 callbacks suppressed
	[  +7.618326] kauditd_printk_skb: 36 callbacks suppressed
	[ +20.574458] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.496686] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.231458] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:08] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.119850] kauditd_printk_skb: 65 callbacks suppressed
	[  +9.791316] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.274613] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.166700] kauditd_printk_skb: 51 callbacks suppressed
	[Aug29 18:09] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:11] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:13] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:16] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.960026] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.856149] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.076798] kauditd_printk_skb: 17 callbacks suppressed
	[Aug29 18:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.882088] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.437607] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.553101] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.346334] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.833680] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.005059] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7] <==
	{"level":"warn","ts":"2024-08-29T18:08:15.916414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.890727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T18:08:15.916450Z","caller":"traceutil/trace.go:171","msg":"trace[383738334] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1140; }","duration":"365.944011ms","start":"2024-08-29T18:08:15.550499Z","end":"2024-08-29T18:08:15.916443Z","steps":["trace[383738334] 'agreement among raft nodes before linearized reading'  (duration: 365.865295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.916483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:08:15.550459Z","time spent":"366.016618ms","remote":"127.0.0.1:37584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":30,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"info","ts":"2024-08-29T18:08:15.915571Z","caller":"traceutil/trace.go:171","msg":"trace[1194422704] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"365.049318ms","start":"2024-08-29T18:08:15.550504Z","end":"2024-08-29T18:08:15.915554Z","steps":["trace[1194422704] 'read index received'  (duration: 364.868874ms)","trace[1194422704] 'applied index is now lower than readState.Index'  (duration: 180.004µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:08:15.916898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.484708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.916956Z","caller":"traceutil/trace.go:171","msg":"trace[83720747] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"207.515173ms","start":"2024-08-29T18:08:15.709401Z","end":"2024-08-29T18:08:15.916916Z","steps":["trace[83720747] 'agreement among raft nodes before linearized reading'  (duration: 207.462966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.917503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.990133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.917550Z","caller":"traceutil/trace.go:171","msg":"trace[1271701390] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"186.041171ms","start":"2024-08-29T18:08:15.731500Z","end":"2024-08-29T18:08:15.917541Z","steps":["trace[1271701390] 'agreement among raft nodes before linearized reading'  (duration: 185.939215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.917854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.129824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.917882Z","caller":"traceutil/trace.go:171","msg":"trace[471033133] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"124.157063ms","start":"2024-08-29T18:08:15.793714Z","end":"2024-08-29T18:08:15.917871Z","steps":["trace[471033133] 'agreement among raft nodes before linearized reading'  (duration: 124.114367ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:08:22.406730Z","caller":"traceutil/trace.go:171","msg":"trace[351282553] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1198; }","duration":"197.570563ms","start":"2024-08-29T18:08:22.209145Z","end":"2024-08-29T18:08:22.406715Z","steps":["trace[351282553] 'read index received'  (duration: 197.399929ms)","trace[351282553] 'applied index is now lower than readState.Index'  (duration: 170.126µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:08:22.407082Z","caller":"traceutil/trace.go:171","msg":"trace[1670518420] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"347.190393ms","start":"2024-08-29T18:08:22.059878Z","end":"2024-08-29T18:08:22.407068Z","steps":["trace[1670518420] 'process raft request'  (duration: 346.707402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.407202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:08:22.059865Z","time spent":"347.274505ms","remote":"127.0.0.1:37314","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":798,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" mod_revision:1131 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" value_size:704 lease:1009247904961359277 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" > >"}
	{"level":"warn","ts":"2024-08-29T18:08:22.414665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.166922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:22.414738Z","caller":"traceutil/trace.go:171","msg":"trace[241417199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"121.257071ms","start":"2024-08-29T18:08:22.293470Z","end":"2024-08-29T18:08:22.414727Z","steps":["trace[241417199] 'agreement among raft nodes before linearized reading'  (duration: 113.986108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.414703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.662655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T18:08:22.414842Z","caller":"traceutil/trace.go:171","msg":"trace[50687533] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1166; }","duration":"193.845523ms","start":"2024-08-29T18:08:22.220985Z","end":"2024-08-29T18:08:22.414831Z","steps":["trace[50687533] 'agreement among raft nodes before linearized reading'  (duration: 186.452075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.414967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.831006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:22.415002Z","caller":"traceutil/trace.go:171","msg":"trace[16339418] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"205.868124ms","start":"2024-08-29T18:08:22.209128Z","end":"2024-08-29T18:08:22.414996Z","steps":["trace[16339418] 'agreement among raft nodes before linearized reading'  (duration: 198.323343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:57.579149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.73227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-647117\" ","response":"range_response_count:1 size:10787"}
	{"level":"info","ts":"2024-08-29T18:08:57.579235Z","caller":"traceutil/trace.go:171","msg":"trace[263122715] range","detail":"{range_begin:/registry/minions/addons-647117; range_end:; response_count:1; response_revision:1297; }","duration":"103.837782ms","start":"2024-08-29T18:08:57.475383Z","end":"2024-08-29T18:08:57.579221Z","steps":["trace[263122715] 'range keys from in-memory index tree'  (duration: 103.559511ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:47.751238Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1559}
	{"level":"info","ts":"2024-08-29T18:16:47.785191Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1559,"took":"33.367177ms","hash":750415669,"current-db-size-bytes":6561792,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3682304,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-08-29T18:16:47.785252Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":750415669,"revision":1559,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T18:17:34.532473Z","caller":"traceutil/trace.go:171","msg":"trace[1845162260] transaction","detail":"{read_only:false; response_revision:2387; number_of_response:1; }","duration":"292.595899ms","start":"2024-08-29T18:17:34.239840Z","end":"2024-08-29T18:17:34.532436Z","steps":["trace[1845162260] 'process raft request'  (duration: 292.224026ms)"],"step_count":1}
	
	
	==> gcp-auth [a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b] <==
	2024/08/29 18:08:26 Ready to write response ...
	2024/08/29 18:08:26 Ready to marshal response ...
	2024/08/29 18:08:26 Ready to write response ...
	2024/08/29 18:08:26 Ready to marshal response ...
	2024/08/29 18:08:26 Ready to write response ...
	2024/08/29 18:16:36 Ready to marshal response ...
	2024/08/29 18:16:36 Ready to write response ...
	2024/08/29 18:16:40 Ready to marshal response ...
	2024/08/29 18:16:40 Ready to write response ...
	2024/08/29 18:16:41 Ready to marshal response ...
	2024/08/29 18:16:41 Ready to write response ...
	2024/08/29 18:16:41 Ready to marshal response ...
	2024/08/29 18:16:41 Ready to write response ...
	2024/08/29 18:16:55 Ready to marshal response ...
	2024/08/29 18:16:55 Ready to write response ...
	2024/08/29 18:17:00 Ready to marshal response ...
	2024/08/29 18:17:00 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:42 Ready to marshal response ...
	2024/08/29 18:17:42 Ready to write response ...
	
	
	==> kernel <==
	 18:17:42 up 11 min,  0 users,  load average: 0.43, 0.55, 0.47
	Linux addons-647117 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b] <==
	W0829 18:08:42.102912       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 18:08:42.103041       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:08:42.103726       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.189.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.189.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.189.204:443: connect: connection refused" logger="UnhandledError"
	I0829 18:08:42.141565       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 18:16:49.533111       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0829 18:17:11.753860       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 18:17:16.195581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.195614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.228724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.228885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.234104       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.234155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.247150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.248440       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.358488       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.358534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:17:17.234989       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:17:17.361145       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 18:17:17.374275       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 18:17:30.375386       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.157.54"}
	I0829 18:17:42.080953       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:17:42.285810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.192.244"}
	
	
	==> kube-controller-manager [7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d] <==
	W0829 18:17:25.302008       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:25.302118       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:25.347768       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:25.347843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:17:25.965052       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 18:17:25.965192       1 shared_informer.go:320] Caches are synced for resource quota
	W0829 18:17:26.169829       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:26.169885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:17:26.487573       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 18:17:26.487618       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:17:29.496510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="6.56µs"
	I0829 18:17:30.443909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="45.053021ms"
	I0829 18:17:30.451840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="7.882762ms"
	I0829 18:17:30.451972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="43.9µs"
	I0829 18:17:30.468244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="151.605µs"
	W0829 18:17:33.043367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:33.043421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:33.260557       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:33.260592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:17:36.021511       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:17:36.021671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:17:36.165772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="111.888µs"
	I0829 18:17:36.193957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="14.78227ms"
	I0829 18:17:36.194177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="91.464µs"
	I0829 18:17:40.847049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="3.872µs"
	
	
	==> kube-proxy [20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:06:58.152664       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:06:58.167873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.43"]
	E0829 18:06:58.167951       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:58.245676       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:06:58.245739       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:06:58.245767       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:58.256186       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:58.256510       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:58.256522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:58.261152       1 config.go:197] "Starting service config controller"
	I0829 18:06:58.261223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:58.261753       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:58.261762       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:58.262346       1 config.go:326] "Starting node config controller"
	I0829 18:06:58.262355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:58.362407       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:58.362425       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:58.362435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef] <==
	W0829 18:06:48.898836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:48.898932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.798293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:49.798410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.798508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.798538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.801096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.801188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.811894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:49.811940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.065849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:50.065949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.089891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:50.089949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.116438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:06:50.116507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.133045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:50.133135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.145488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:50.145535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.150457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:50.150555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.390065       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:50.390353       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:52.182506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:17:40 addons-647117 kubelet[1203]: I0829 18:17:40.625864    1203 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1d1e0c4e-1e7d-407c-8b4b-c90494ed1fb1-gcp-creds\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:17:40 addons-647117 kubelet[1203]: I0829 18:17:40.625893    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8xf9c\" (UniqueName: \"kubernetes.io/projected/1d1e0c4e-1e7d-407c-8b4b-c90494ed1fb1-kube-api-access-8xf9c\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.186987    1203 scope.go:117] "RemoveContainer" containerID="8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.229002    1203 scope.go:117] "RemoveContainer" containerID="8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.232392    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgph7\" (UniqueName: \"kubernetes.io/projected/cc4a9ea4-4575-4df4-a260-191792ddc309-kube-api-access-rgph7\") pod \"cc4a9ea4-4575-4df4-a260-191792ddc309\" (UID: \"cc4a9ea4-4575-4df4-a260-191792ddc309\") "
	Aug 29 18:17:41 addons-647117 kubelet[1203]: E0829 18:17:41.233667    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af\": container with ID starting with 8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af not found: ID does not exist" containerID="8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.233700    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af"} err="failed to get container status \"8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af\": rpc error: code = NotFound desc = could not find container \"8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af\": container with ID starting with 8b079695f275ad90710f5dc746664e8a4311a13e31f1d24c0920bdf51ad943af not found: ID does not exist"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.233722    1203 scope.go:117] "RemoveContainer" containerID="f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.235447    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4a9ea4-4575-4df4-a260-191792ddc309-kube-api-access-rgph7" (OuterVolumeSpecName: "kube-api-access-rgph7") pod "cc4a9ea4-4575-4df4-a260-191792ddc309" (UID: "cc4a9ea4-4575-4df4-a260-191792ddc309"). InnerVolumeSpecName "kube-api-access-rgph7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.250989    1203 scope.go:117] "RemoveContainer" containerID="f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: E0829 18:17:41.251463    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c\": container with ID starting with f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c not found: ID does not exist" containerID="f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.251504    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c"} err="failed to get container status \"f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c\": rpc error: code = NotFound desc = could not find container \"f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c\": container with ID starting with f4eeecd69bf752072f0f9b17e1cf345079caace46780127ca38b7224b0da818c not found: ID does not exist"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.333162    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk6xd\" (UniqueName: \"kubernetes.io/projected/dae462a3-dc8d-436d-8360-ee8d164ab845-kube-api-access-jk6xd\") pod \"dae462a3-dc8d-436d-8360-ee8d164ab845\" (UID: \"dae462a3-dc8d-436d-8360-ee8d164ab845\") "
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.333362    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rgph7\" (UniqueName: \"kubernetes.io/projected/cc4a9ea4-4575-4df4-a260-191792ddc309-kube-api-access-rgph7\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.335758    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae462a3-dc8d-436d-8360-ee8d164ab845-kube-api-access-jk6xd" (OuterVolumeSpecName: "kube-api-access-jk6xd") pod "dae462a3-dc8d-436d-8360-ee8d164ab845" (UID: "dae462a3-dc8d-436d-8360-ee8d164ab845"). InnerVolumeSpecName "kube-api-access-jk6xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.434078    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jk6xd\" (UniqueName: \"kubernetes.io/projected/dae462a3-dc8d-436d-8360-ee8d164ab845-kube-api-access-jk6xd\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:17:41 addons-647117 kubelet[1203]: I0829 18:17:41.438933    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d1e0c4e-1e7d-407c-8b4b-c90494ed1fb1" path="/var/lib/kubelet/pods/1d1e0c4e-1e7d-407c-8b4b-c90494ed1fb1/volumes"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: E0829 18:17:41.814094    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955461812989134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526309,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:17:41 addons-647117 kubelet[1203]: E0829 18:17:41.814125    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955461812989134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:526309,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:17:42 addons-647117 kubelet[1203]: E0829 18:17:42.235271    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dae462a3-dc8d-436d-8360-ee8d164ab845" containerName="registry-proxy"
	Aug 29 18:17:42 addons-647117 kubelet[1203]: E0829 18:17:42.235362    1203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc4a9ea4-4575-4df4-a260-191792ddc309" containerName="registry"
	Aug 29 18:17:42 addons-647117 kubelet[1203]: I0829 18:17:42.235469    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae462a3-dc8d-436d-8360-ee8d164ab845" containerName="registry-proxy"
	Aug 29 18:17:42 addons-647117 kubelet[1203]: I0829 18:17:42.235520    1203 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4a9ea4-4575-4df4-a260-191792ddc309" containerName="registry"
	Aug 29 18:17:42 addons-647117 kubelet[1203]: I0829 18:17:42.341655    1203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb9bg\" (UniqueName: \"kubernetes.io/projected/5146adcd-04b5-44c5-bbda-6d831cc2420c-kube-api-access-bb9bg\") pod \"nginx\" (UID: \"5146adcd-04b5-44c5-bbda-6d831cc2420c\") " pod="default/nginx"
	Aug 29 18:17:42 addons-647117 kubelet[1203]: I0829 18:17:42.341714    1203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5146adcd-04b5-44c5-bbda-6d831cc2420c-gcp-creds\") pod \"nginx\" (UID: \"5146adcd-04b5-44c5-bbda-6d831cc2420c\") " pod="default/nginx"
	
	
	==> storage-provisioner [c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747] <==
	I0829 18:07:03.102621       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:07:03.125054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:07:03.125120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:07:03.142183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:07:03.142357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b!
	I0829 18:07:03.143256       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a8c384d-e72d-41a0-bfd7-8f50bdcd533c", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b became leader
	I0829 18:07:03.243000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-647117 -n addons-647117
helpers_test.go:261: (dbg) Run:  kubectl --context addons-647117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx ingress-nginx-admission-create-qkkdh ingress-nginx-admission-patch-tg7nb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-647117 describe pod busybox nginx ingress-nginx-admission-create-qkkdh ingress-nginx-admission-patch-tg7nb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-647117 describe pod busybox nginx ingress-nginx-admission-create-qkkdh ingress-nginx-admission-patch-tg7nb: exit status 1 (75.725336ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-647117/192.168.39.43
	Start Time:       Thu, 29 Aug 2024 18:08:26 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kj2nj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kj2nj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-647117
	  Normal   Pulling    7m48s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-647117/192.168.39.43
	Start Time:       Thu, 29 Aug 2024 18:17:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bb9bg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bb9bg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-647117
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qkkdh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tg7nb" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-647117 describe pod busybox nginx ingress-nginx-admission-create-qkkdh ingress-nginx-admission-patch-tg7nb: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.33s)
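For reference, the post-mortem step above collects the pods that are not in the Running phase with a field selector and then describes them. A minimal sketch of running the same checks by hand, using the addons-647117 context and the pod names reported above (illustrative only, not part of the test suite):

	# List pods in any namespace whose phase is not Running (the same query helpers_test.go runs above)
	kubectl --context addons-647117 get pods -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# Describe the reported pods; names that no longer exist (e.g. the deleted admission jobs) make kubectl exit non-zero, as seen above
	kubectl --context addons-647117 describe pod busybox nginx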

TestAddons/parallel/Ingress (152.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-647117 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-647117 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-647117 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5146adcd-04b5-44c5-bbda-6d831cc2420c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5146adcd-04b5-44c5-bbda-6d831cc2420c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004609625s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-647117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.816262515s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-647117 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.43
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 addons disable ingress-dns --alsologtostderr -v=1: (1.371396039s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 addons disable ingress --alsologtostderr -v=1: (7.760362294s)
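The failing assertion in this test is the curl probe through the ingress controller (the ssh command exited with status 28, which matches curl's operation-timed-out exit code). The report does not include the contents of testdata/nginx-ingress-v1.yaml, so the sketch below is only an approximation of that manifest and of the manual probe; it assumes an nginx Service named nginx listening on port 80 in the default namespace:

	# Hypothetical approximation of the Ingress the test applies; the real testdata/nginx-ingress-v1.yaml may differ
	kubectl --context addons-647117 apply -f - <<-'EOF'
	apiVersion: networking.k8s.io/v1
	kind: Ingress
	metadata:
	  name: nginx-ingress
	spec:
	  rules:
	  - host: nginx.example.com
	    http:
	      paths:
	      - path: /
	        pathType: Prefix
	        backend:
	          service:
	            name: nginx
	            port:
	              number: 80
	EOF
	# Repeat the probe the test performs from inside the VM
	out/minikube-linux-amd64 -p addons-647117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"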
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-647117 -n addons-647117
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 logs -n 25: (1.179468149s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-105926                                                                     | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| delete  | -p download-only-366415                                                                     | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| delete  | -p download-only-105926                                                                     | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728877 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | binary-mirror-728877                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38491                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728877                                                                     | binary-mirror-728877 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-647117 --wait=true                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-647117 ssh cat                                                                       | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | /opt/local-path-provisioner/pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-647117                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-647117                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-647117 ip                                                                            | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-647117 ssh curl -s                                                                   | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-647117 ip                                                                            | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:20 UTC | 29 Aug 24 18:20 UTC |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:20 UTC | 29 Aug 24 18:20 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:20 UTC | 29 Aug 24 18:20 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:06:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:06:13.977708   21003 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:06:13.977815   21003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:13.977823   21003 out.go:358] Setting ErrFile to fd 2...
	I0829 18:06:13.977827   21003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:13.977999   21003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:06:13.978601   21003 out.go:352] Setting JSON to false
	I0829 18:06:13.979455   21003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2921,"bootTime":1724951853,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:06:13.979510   21003 start.go:139] virtualization: kvm guest
	I0829 18:06:14.042675   21003 out.go:177] * [addons-647117] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:06:14.104740   21003 notify.go:220] Checking for updates...
	I0829 18:06:14.167604   21003 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:06:14.229702   21003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:06:14.294106   21003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:06:14.342682   21003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:14.344101   21003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:06:14.345367   21003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:06:14.346953   21003 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:06:14.377848   21003 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:06:14.379196   21003 start.go:297] selected driver: kvm2
	I0829 18:06:14.379209   21003 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:06:14.379220   21003 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:06:14.379903   21003 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:14.379987   21003 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:06:14.395270   21003 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:06:14.395314   21003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:14.395519   21003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:14.395554   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:14.395565   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:14.395574   21003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:14.395622   21003 start.go:340] cluster config:
	{Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:14.395709   21003 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:14.397385   21003 out.go:177] * Starting "addons-647117" primary control-plane node in "addons-647117" cluster
	I0829 18:06:14.398568   21003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:06:14.398598   21003 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:06:14.398606   21003 cache.go:56] Caching tarball of preloaded images
	I0829 18:06:14.398682   21003 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:06:14.398692   21003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:06:14.398994   21003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json ...
	I0829 18:06:14.399012   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json: {Name:mkcc99c38dc1733f24d9d95208d6cd89ecd08f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:14.399129   21003 start.go:360] acquireMachinesLock for addons-647117: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:06:14.399169   21003 start.go:364] duration metric: took 27.979µs to acquireMachinesLock for "addons-647117"
	I0829 18:06:14.399185   21003 start.go:93] Provisioning new machine with config: &{Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:14.399236   21003 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:06:14.400651   21003 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:06:14.400800   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:14.400842   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:14.414391   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0829 18:06:14.414771   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:14.415264   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:14.415277   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:14.415573   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:14.415698   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:14.415826   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:14.415924   21003 start.go:159] libmachine.API.Create for "addons-647117" (driver="kvm2")
	I0829 18:06:14.415948   21003 client.go:168] LocalClient.Create starting
	I0829 18:06:14.415980   21003 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:06:14.569250   21003 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:06:14.895450   21003 main.go:141] libmachine: Running pre-create checks...
	I0829 18:06:14.895478   21003 main.go:141] libmachine: (addons-647117) Calling .PreCreateCheck
	I0829 18:06:14.896002   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:14.896427   21003 main.go:141] libmachine: Creating machine...
	I0829 18:06:14.896441   21003 main.go:141] libmachine: (addons-647117) Calling .Create
	I0829 18:06:14.896565   21003 main.go:141] libmachine: (addons-647117) Creating KVM machine...
	I0829 18:06:14.897900   21003 main.go:141] libmachine: (addons-647117) DBG | found existing default KVM network
	I0829 18:06:14.898643   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:14.898505   21025 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0829 18:06:14.898675   21003 main.go:141] libmachine: (addons-647117) DBG | created network xml: 
	I0829 18:06:14.898690   21003 main.go:141] libmachine: (addons-647117) DBG | <network>
	I0829 18:06:14.898701   21003 main.go:141] libmachine: (addons-647117) DBG |   <name>mk-addons-647117</name>
	I0829 18:06:14.898712   21003 main.go:141] libmachine: (addons-647117) DBG |   <dns enable='no'/>
	I0829 18:06:14.898720   21003 main.go:141] libmachine: (addons-647117) DBG |   
	I0829 18:06:14.898727   21003 main.go:141] libmachine: (addons-647117) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:06:14.898734   21003 main.go:141] libmachine: (addons-647117) DBG |     <dhcp>
	I0829 18:06:14.898743   21003 main.go:141] libmachine: (addons-647117) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:06:14.898752   21003 main.go:141] libmachine: (addons-647117) DBG |     </dhcp>
	I0829 18:06:14.898766   21003 main.go:141] libmachine: (addons-647117) DBG |   </ip>
	I0829 18:06:14.898775   21003 main.go:141] libmachine: (addons-647117) DBG |   
	I0829 18:06:14.898785   21003 main.go:141] libmachine: (addons-647117) DBG | </network>
	I0829 18:06:14.898795   21003 main.go:141] libmachine: (addons-647117) DBG | 
	I0829 18:06:14.904085   21003 main.go:141] libmachine: (addons-647117) DBG | trying to create private KVM network mk-addons-647117 192.168.39.0/24...
	I0829 18:06:14.968799   21003 main.go:141] libmachine: (addons-647117) DBG | private KVM network mk-addons-647117 192.168.39.0/24 created
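The private KVM network mk-addons-647117 is defined from the generated XML printed above and brought up before the VM exists. Roughly the same steps done by hand with virsh (a sketch, assuming the XML were saved to a hypothetical /tmp/mk-addons-647117.xml):

    # define and start a private libvirt network from the XML above
    virsh net-define /tmp/mk-addons-647117.xml
    virsh net-start mk-addons-647117
    # confirm it is active alongside the 'default' network
    virsh net-list --all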
	I0829 18:06:14.968849   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:14.968765   21025 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:14.968877   21003 main.go:141] libmachine: (addons-647117) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 ...
	I0829 18:06:14.968903   21003 main.go:141] libmachine: (addons-647117) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:06:14.968915   21003 main.go:141] libmachine: (addons-647117) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:06:15.221752   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.221579   21025 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa...
	I0829 18:06:15.315051   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.314930   21025 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/addons-647117.rawdisk...
	I0829 18:06:15.315079   21003 main.go:141] libmachine: (addons-647117) DBG | Writing magic tar header
	I0829 18:06:15.315090   21003 main.go:141] libmachine: (addons-647117) DBG | Writing SSH key tar header
	I0829 18:06:15.315098   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.315038   21025 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 ...
	I0829 18:06:15.315184   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117
	I0829 18:06:15.315224   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:06:15.315248   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:15.315262   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 (perms=drwx------)
	I0829 18:06:15.315273   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:06:15.315304   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:06:15.315312   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:06:15.315321   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:06:15.315328   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home
	I0829 18:06:15.315335   21003 main.go:141] libmachine: (addons-647117) DBG | Skipping /home - not owner
	I0829 18:06:15.315347   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:06:15.315365   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:06:15.315380   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:06:15.315392   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:06:15.315402   21003 main.go:141] libmachine: (addons-647117) Creating domain...
	I0829 18:06:15.316378   21003 main.go:141] libmachine: (addons-647117) define libvirt domain using xml: 
	I0829 18:06:15.316405   21003 main.go:141] libmachine: (addons-647117) <domain type='kvm'>
	I0829 18:06:15.316415   21003 main.go:141] libmachine: (addons-647117)   <name>addons-647117</name>
	I0829 18:06:15.316423   21003 main.go:141] libmachine: (addons-647117)   <memory unit='MiB'>4000</memory>
	I0829 18:06:15.316431   21003 main.go:141] libmachine: (addons-647117)   <vcpu>2</vcpu>
	I0829 18:06:15.316442   21003 main.go:141] libmachine: (addons-647117)   <features>
	I0829 18:06:15.316449   21003 main.go:141] libmachine: (addons-647117)     <acpi/>
	I0829 18:06:15.316456   21003 main.go:141] libmachine: (addons-647117)     <apic/>
	I0829 18:06:15.316462   21003 main.go:141] libmachine: (addons-647117)     <pae/>
	I0829 18:06:15.316466   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316471   21003 main.go:141] libmachine: (addons-647117)   </features>
	I0829 18:06:15.316478   21003 main.go:141] libmachine: (addons-647117)   <cpu mode='host-passthrough'>
	I0829 18:06:15.316485   21003 main.go:141] libmachine: (addons-647117)   
	I0829 18:06:15.316498   21003 main.go:141] libmachine: (addons-647117)   </cpu>
	I0829 18:06:15.316508   21003 main.go:141] libmachine: (addons-647117)   <os>
	I0829 18:06:15.316517   21003 main.go:141] libmachine: (addons-647117)     <type>hvm</type>
	I0829 18:06:15.316539   21003 main.go:141] libmachine: (addons-647117)     <boot dev='cdrom'/>
	I0829 18:06:15.316547   21003 main.go:141] libmachine: (addons-647117)     <boot dev='hd'/>
	I0829 18:06:15.316552   21003 main.go:141] libmachine: (addons-647117)     <bootmenu enable='no'/>
	I0829 18:06:15.316559   21003 main.go:141] libmachine: (addons-647117)   </os>
	I0829 18:06:15.316563   21003 main.go:141] libmachine: (addons-647117)   <devices>
	I0829 18:06:15.316572   21003 main.go:141] libmachine: (addons-647117)     <disk type='file' device='cdrom'>
	I0829 18:06:15.316581   21003 main.go:141] libmachine: (addons-647117)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/boot2docker.iso'/>
	I0829 18:06:15.316590   21003 main.go:141] libmachine: (addons-647117)       <target dev='hdc' bus='scsi'/>
	I0829 18:06:15.316595   21003 main.go:141] libmachine: (addons-647117)       <readonly/>
	I0829 18:06:15.316602   21003 main.go:141] libmachine: (addons-647117)     </disk>
	I0829 18:06:15.316607   21003 main.go:141] libmachine: (addons-647117)     <disk type='file' device='disk'>
	I0829 18:06:15.316626   21003 main.go:141] libmachine: (addons-647117)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:06:15.316642   21003 main.go:141] libmachine: (addons-647117)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/addons-647117.rawdisk'/>
	I0829 18:06:15.316654   21003 main.go:141] libmachine: (addons-647117)       <target dev='hda' bus='virtio'/>
	I0829 18:06:15.316661   21003 main.go:141] libmachine: (addons-647117)     </disk>
	I0829 18:06:15.316669   21003 main.go:141] libmachine: (addons-647117)     <interface type='network'>
	I0829 18:06:15.316676   21003 main.go:141] libmachine: (addons-647117)       <source network='mk-addons-647117'/>
	I0829 18:06:15.316682   21003 main.go:141] libmachine: (addons-647117)       <model type='virtio'/>
	I0829 18:06:15.316691   21003 main.go:141] libmachine: (addons-647117)     </interface>
	I0829 18:06:15.316697   21003 main.go:141] libmachine: (addons-647117)     <interface type='network'>
	I0829 18:06:15.316707   21003 main.go:141] libmachine: (addons-647117)       <source network='default'/>
	I0829 18:06:15.316722   21003 main.go:141] libmachine: (addons-647117)       <model type='virtio'/>
	I0829 18:06:15.316738   21003 main.go:141] libmachine: (addons-647117)     </interface>
	I0829 18:06:15.316747   21003 main.go:141] libmachine: (addons-647117)     <serial type='pty'>
	I0829 18:06:15.316759   21003 main.go:141] libmachine: (addons-647117)       <target port='0'/>
	I0829 18:06:15.316779   21003 main.go:141] libmachine: (addons-647117)     </serial>
	I0829 18:06:15.316794   21003 main.go:141] libmachine: (addons-647117)     <console type='pty'>
	I0829 18:06:15.316812   21003 main.go:141] libmachine: (addons-647117)       <target type='serial' port='0'/>
	I0829 18:06:15.316825   21003 main.go:141] libmachine: (addons-647117)     </console>
	I0829 18:06:15.316835   21003 main.go:141] libmachine: (addons-647117)     <rng model='virtio'>
	I0829 18:06:15.316848   21003 main.go:141] libmachine: (addons-647117)       <backend model='random'>/dev/random</backend>
	I0829 18:06:15.316855   21003 main.go:141] libmachine: (addons-647117)     </rng>
	I0829 18:06:15.316860   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316866   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316871   21003 main.go:141] libmachine: (addons-647117)   </devices>
	I0829 18:06:15.316880   21003 main.go:141] libmachine: (addons-647117) </domain>
	I0829 18:06:15.316887   21003 main.go:141] libmachine: (addons-647117) 
	I0829 18:06:15.323470   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:5e:cf:4e in network default
	I0829 18:06:15.324032   21003 main.go:141] libmachine: (addons-647117) Ensuring networks are active...
	I0829 18:06:15.324048   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:15.324701   21003 main.go:141] libmachine: (addons-647117) Ensuring network default is active
	I0829 18:06:15.325084   21003 main.go:141] libmachine: (addons-647117) Ensuring network mk-addons-647117 is active
	I0829 18:06:15.325712   21003 main.go:141] libmachine: (addons-647117) Getting domain xml...
	I0829 18:06:15.326373   21003 main.go:141] libmachine: (addons-647117) Creating domain...
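The domain XML above is then handed to libvirt and the VM is booted ("Creating domain..."). A roughly equivalent manual sequence with virsh (a sketch, assuming the XML were saved to a hypothetical /tmp/addons-647117.xml):

    # define the domain from the XML above, then boot it
    virsh define /tmp/addons-647117.xml
    virsh start addons-647117
    # dump the effective definition (what the driver does in "Getting domain xml...")
    virsh dumpxml addons-647117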
	I0829 18:06:16.712917   21003 main.go:141] libmachine: (addons-647117) Waiting to get IP...
	I0829 18:06:16.713812   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:16.714232   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:16.714268   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:16.714191   21025 retry.go:31] will retry after 238.340471ms: waiting for machine to come up
	I0829 18:06:16.954554   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:16.954978   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:16.955001   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:16.954942   21025 retry.go:31] will retry after 341.720897ms: waiting for machine to come up
	I0829 18:06:17.298471   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:17.298940   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:17.298959   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:17.298900   21025 retry.go:31] will retry after 367.433652ms: waiting for machine to come up
	I0829 18:06:17.668160   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:17.668555   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:17.668592   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:17.668512   21025 retry.go:31] will retry after 516.863981ms: waiting for machine to come up
	I0829 18:06:18.187183   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:18.187670   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:18.187696   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:18.187622   21025 retry.go:31] will retry after 716.140795ms: waiting for machine to come up
	I0829 18:06:18.905500   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:18.905827   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:18.905850   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:18.905787   21025 retry.go:31] will retry after 722.824428ms: waiting for machine to come up
	I0829 18:06:19.630367   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:19.630812   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:19.630841   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:19.630788   21025 retry.go:31] will retry after 1.117686988s: waiting for machine to come up
	I0829 18:06:20.750072   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:20.750586   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:20.750618   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:20.750537   21025 retry.go:31] will retry after 1.201180121s: waiting for machine to come up
	I0829 18:06:21.953781   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:21.954227   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:21.954255   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:21.954176   21025 retry.go:31] will retry after 1.317171091s: waiting for machine to come up
	I0829 18:06:23.273606   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:23.274028   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:23.274056   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:23.273995   21025 retry.go:31] will retry after 2.013319683s: waiting for machine to come up
	I0829 18:06:25.289339   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:25.289856   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:25.289881   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:25.289815   21025 retry.go:31] will retry after 2.820105587s: waiting for machine to come up
	I0829 18:06:28.113685   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:28.113965   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:28.113988   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:28.113931   21025 retry.go:31] will retry after 2.971291296s: waiting for machine to come up
	I0829 18:06:31.088861   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:31.089282   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:31.089302   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:31.089247   21025 retry.go:31] will retry after 3.52398133s: waiting for machine to come up
	I0829 18:06:34.615265   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.615739   21003 main.go:141] libmachine: (addons-647117) Found IP for machine: 192.168.39.43
	I0829 18:06:34.615757   21003 main.go:141] libmachine: (addons-647117) Reserving static IP address...
	I0829 18:06:34.615765   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.616209   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find host DHCP lease matching {name: "addons-647117", mac: "52:54:00:b2:0d:0e", ip: "192.168.39.43"} in network mk-addons-647117
	I0829 18:06:34.684039   21003 main.go:141] libmachine: (addons-647117) DBG | Getting to WaitForSSH function...
	I0829 18:06:34.684068   21003 main.go:141] libmachine: (addons-647117) Reserved static IP address: 192.168.39.43
	I0829 18:06:34.684097   21003 main.go:141] libmachine: (addons-647117) Waiting for SSH to be available...
	I0829 18:06:34.686579   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.686973   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.687021   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
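The retry loop above is the driver polling libvirt's DHCP leases until the VM's MAC 52:54:00:b2:0d:0e is assigned an address; in this run it resolves to 192.168.39.43 after roughly 20 seconds. A minimal shell sketch of the same wait, useful for checking by hand:

    # wait until the VM's MAC appears in the private network's DHCP leases
    until virsh net-dhcp-leases mk-addons-647117 | grep -q '52:54:00:b2:0d:0e'; do
      sleep 2
    done
    virsh net-dhcp-leases mk-addons-647117   # shows the leased IP (192.168.39.43 in this run)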
	I0829 18:06:34.687238   21003 main.go:141] libmachine: (addons-647117) DBG | Using SSH client type: external
	I0829 18:06:34.687266   21003 main.go:141] libmachine: (addons-647117) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa (-rw-------)
	I0829 18:06:34.687303   21003 main.go:141] libmachine: (addons-647117) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:06:34.687317   21003 main.go:141] libmachine: (addons-647117) DBG | About to run SSH command:
	I0829 18:06:34.687334   21003 main.go:141] libmachine: (addons-647117) DBG | exit 0
	I0829 18:06:34.813742   21003 main.go:141] libmachine: (addons-647117) DBG | SSH cmd err, output: <nil>: 
	I0829 18:06:34.814023   21003 main.go:141] libmachine: (addons-647117) KVM machine creation complete!
	I0829 18:06:34.814355   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:34.814860   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:34.815029   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:34.815194   21003 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:06:34.815210   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:34.816482   21003 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:06:34.816493   21003 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:06:34.816499   21003 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:06:34.816504   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:34.818985   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.819310   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.819338   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.819489   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:34.819706   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.819854   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.820002   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:34.820159   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:34.820371   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:34.820389   21003 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:06:34.921578   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:06:34.921611   21003 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:06:34.921625   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:34.924576   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.924991   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.925016   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.925174   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:34.925364   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.925535   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.925681   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:34.925862   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:34.926048   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:34.926062   21003 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:06:35.026824   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:06:35.026889   21003 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:06:35.026897   21003 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:06:35.026904   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.027145   21003 buildroot.go:166] provisioning hostname "addons-647117"
	I0829 18:06:35.027170   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.027344   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.029702   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.030060   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.030099   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.030232   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.030413   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.030536   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.030687   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.030879   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.031071   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.031084   21003 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-647117 && echo "addons-647117" | sudo tee /etc/hostname
	I0829 18:06:35.143742   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-647117
	
	I0829 18:06:35.143777   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.146325   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.146651   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.146679   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.146798   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.146981   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.147130   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.147305   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.147468   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.147673   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.147697   21003 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-647117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-647117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-647117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:06:35.254118   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
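The script above sets the hostname and ensures /etc/hosts carries a matching 127.0.1.1 entry. A quick manual check over SSH, reusing the key path and address from this log (a sketch, not part of the test run):

    # verify hostname and hosts entry inside the guest
    ssh -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa \
        docker@192.168.39.43 'hostname; grep addons-647117 /etc/hosts'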
	I0829 18:06:35.254140   21003 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:06:35.254159   21003 buildroot.go:174] setting up certificates
	I0829 18:06:35.254169   21003 provision.go:84] configureAuth start
	I0829 18:06:35.254180   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.254506   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:35.256912   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.257308   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.257336   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.257542   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.259793   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.260096   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.260130   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.260195   21003 provision.go:143] copyHostCerts
	I0829 18:06:35.260261   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:06:35.260392   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:06:35.260483   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:06:35.260557   21003 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.addons-647117 san=[127.0.0.1 192.168.39.43 addons-647117 localhost minikube]
	I0829 18:06:35.482587   21003 provision.go:177] copyRemoteCerts
	I0829 18:06:35.482639   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:06:35.482659   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.485179   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.485582   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.485615   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.485697   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.485936   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.486060   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.486278   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:35.563694   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:06:35.586261   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:06:35.607564   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:06:35.628579   21003 provision.go:87] duration metric: took 374.398756ms to configureAuth
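configureAuth generates a server certificate whose SANs cover the VM's IP and hostname (the san=[...] list above) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to inspect the SANs of the generated server.pem (a sketch, assuming openssl is available on the host):

    # print the certificate and show its Subject Alternative Name entries
    openssl x509 -in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'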
	I0829 18:06:35.628613   21003 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:06:35.628805   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:35.628886   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.631347   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.631736   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.631762   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.631917   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.632078   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.632214   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.632368   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.632522   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.632739   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.632758   21003 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:06:35.841964   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:06:35.841995   21003 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:06:35.842008   21003 main.go:141] libmachine: (addons-647117) Calling .GetURL
	I0829 18:06:35.843265   21003 main.go:141] libmachine: (addons-647117) DBG | Using libvirt version 6000000
	I0829 18:06:35.845052   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.845418   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.845442   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.845675   21003 main.go:141] libmachine: Docker is up and running!
	I0829 18:06:35.845695   21003 main.go:141] libmachine: Reticulating splines...
	I0829 18:06:35.845701   21003 client.go:171] duration metric: took 21.429743968s to LocalClient.Create
	I0829 18:06:35.845719   21003 start.go:167] duration metric: took 21.429794926s to libmachine.API.Create "addons-647117"
	I0829 18:06:35.845736   21003 start.go:293] postStartSetup for "addons-647117" (driver="kvm2")
	I0829 18:06:35.845745   21003 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:06:35.845761   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:35.846039   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:06:35.846062   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.848219   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.848637   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.848666   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.848784   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.848951   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.849108   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.849229   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:35.928027   21003 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:06:35.932082   21003 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:06:35.932107   21003 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:06:35.932175   21003 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:06:35.932199   21003 start.go:296] duration metric: took 86.457988ms for postStartSetup
	I0829 18:06:35.932245   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:35.932768   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:35.935311   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.935660   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.935689   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.935874   21003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json ...
	I0829 18:06:35.936046   21003 start.go:128] duration metric: took 21.536800088s to createHost
	I0829 18:06:35.936069   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.938226   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.938550   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.938580   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.938691   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.938940   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.939092   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.939226   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.939371   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.939518   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.939538   21003 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:06:36.038471   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724954796.013287706
	
	I0829 18:06:36.038494   21003 fix.go:216] guest clock: 1724954796.013287706
	I0829 18:06:36.038502   21003 fix.go:229] Guest: 2024-08-29 18:06:36.013287706 +0000 UTC Remote: 2024-08-29 18:06:35.936057575 +0000 UTC m=+21.991416237 (delta=77.230131ms)
	I0829 18:06:36.038547   21003 fix.go:200] guest clock delta is within tolerance: 77.230131ms
	I0829 18:06:36.038563   21003 start.go:83] releasing machines lock for "addons-647117", held for 21.639379915s
	I0829 18:06:36.038587   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.038894   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:36.041687   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.042103   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.042129   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.042309   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.042820   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.042990   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.043053   21003 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:06:36.043093   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:36.043222   21003 ssh_runner.go:195] Run: cat /version.json
	I0829 18:06:36.043244   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:36.045522   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.045759   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.045868   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.045890   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.046150   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:36.046153   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.046208   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.046302   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:36.046386   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:36.046567   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:36.046570   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:36.046716   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:36.046731   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:36.046852   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:36.118579   21003 ssh_runner.go:195] Run: systemctl --version
	I0829 18:06:36.156970   21003 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:06:36.311217   21003 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:06:36.316594   21003 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:06:36.316675   21003 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:06:36.332219   21003 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:06:36.332250   21003 start.go:495] detecting cgroup driver to use...
	I0829 18:06:36.332314   21003 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:06:36.347317   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:06:36.360521   21003 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:06:36.360590   21003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:06:36.373585   21003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:06:36.386343   21003 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:06:36.502547   21003 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:06:36.637748   21003 docker.go:233] disabling docker service ...
	I0829 18:06:36.637830   21003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:06:36.651446   21003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:06:36.663735   21003 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:06:36.798359   21003 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:06:36.922508   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:06:36.935648   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:36.952902   21003 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:06:36.952958   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.963059   21003 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:06:36.963140   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.973105   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.982774   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.992245   21003 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:37.001920   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.011179   21003 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.026117   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
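The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf: pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A quick way to confirm the result on the guest (a sketch):

    # the keys touched by the sed commands above should now read as expected
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf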
	I0829 18:06:37.035522   21003 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:37.043886   21003 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:06:37.043934   21003 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:06:37.055999   21003 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
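Because /proc/sys/net/bridge/bridge-nf-call-iptables is initially missing, the driver loads br_netfilter and enables IPv4 forwarding before restarting CRI-O. A small verification sketch for the guest:

    # module loaded and sysctls in effect?
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward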
	I0829 18:06:37.064714   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:37.196530   21003 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:06:37.287929   21003 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:06:37.288028   21003 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:06:37.292396   21003 start.go:563] Will wait 60s for crictl version
	I0829 18:06:37.292454   21003 ssh_runner.go:195] Run: which crictl
	I0829 18:06:37.296073   21003 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:06:37.332725   21003 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:06:37.332849   21003 ssh_runner.go:195] Run: crio --version
	I0829 18:06:37.359173   21003 ssh_runner.go:195] Run: crio --version
	I0829 18:06:37.388107   21003 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:06:37.389284   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:37.391507   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:37.391814   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:37.391841   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:37.391979   21003 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:37.395789   21003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:37.408717   21003 kubeadm.go:883] updating cluster {Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:06:37.408820   21003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:06:37.408873   21003 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:06:37.443962   21003 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:06:37.444029   21003 ssh_runner.go:195] Run: which lz4
	I0829 18:06:37.447695   21003 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:06:37.451549   21003 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:06:37.451575   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:06:38.585685   21003 crio.go:462] duration metric: took 1.138016489s to copy over tarball
	I0829 18:06:38.585747   21003 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:06:40.668015   21003 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082235438s)
	I0829 18:06:40.668044   21003 crio.go:469] duration metric: took 2.082332165s to extract the tarball
	I0829 18:06:40.668052   21003 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:06:40.704995   21003 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:06:40.744652   21003 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:06:40.744681   21003 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:06:40.744691   21003 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.31.0 crio true true} ...
	I0829 18:06:40.744815   21003 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-647117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:06:40.744879   21003 ssh_runner.go:195] Run: crio config
	I0829 18:06:40.799521   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:40.799538   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:40.799554   21003 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:40.799578   21003 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-647117 NodeName:addons-647117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:40.799725   21003 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-647117"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:06:40.799784   21003 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:40.809042   21003 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:06:40.809100   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:40.817470   21003 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 18:06:40.832347   21003 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:40.846895   21003 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0829 18:06:40.861793   21003 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:40.865178   21003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:40.875661   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:40.982884   21003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:40.997705   21003 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117 for IP: 192.168.39.43
	I0829 18:06:40.997731   21003 certs.go:194] generating shared ca certs ...
	I0829 18:06:40.997746   21003 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:40.997866   21003 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:06:41.043528   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt ...
	I0829 18:06:41.043558   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt: {Name:mkea6106ba4ad65ce6f8bed60295c8f24482327b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.043722   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key ...
	I0829 18:06:41.043735   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key: {Name:mke9ce6afa81d222f2c50749e4037b87a5d38dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.043805   21003 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:06:41.128075   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt ...
	I0829 18:06:41.128106   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt: {Name:mkdbc53401c430ff97fec9666f2d5f302313570c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.128259   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key ...
	I0829 18:06:41.128270   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key: {Name:mk367415a361fb5a9c7503ec33cd8caa1e52aa57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.128329   21003 certs.go:256] generating profile certs ...
	I0829 18:06:41.128382   21003 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key
	I0829 18:06:41.128395   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt with IP's: []
	I0829 18:06:41.221652   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt ...
	I0829 18:06:41.221679   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: {Name:mk7255e28303157d05d1b68e28117d8e36fbd22c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.221828   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key ...
	I0829 18:06:41.221838   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key: {Name:mkbf2b01f6f057886492f2c68b0e29df0e06c856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.222390   21003 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9
	I0829 18:06:41.222413   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43]
	I0829 18:06:41.392081   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 ...
	I0829 18:06:41.392114   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9: {Name:mkd530b794cbdec523005231e4a057aefd476fa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.392297   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9 ...
	I0829 18:06:41.392313   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9: {Name:mk3e2c877bb82fbb95364dcb98f1881ca9941820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.392417   21003 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt
	I0829 18:06:41.392493   21003 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key
	I0829 18:06:41.392538   21003 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key
	I0829 18:06:41.392555   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt with IP's: []
	I0829 18:06:41.549956   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt ...
	I0829 18:06:41.549986   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt: {Name:mke718e76c91b48339bb92cf2bf888e30bb5dc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.550174   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key ...
	I0829 18:06:41.550190   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key: {Name:mkd9cbaa4b6e0247b270644d1a1f676717828d7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.550382   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:06:41.550419   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:06:41.550440   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:41.550461   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:06:41.551061   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:41.574578   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:06:41.596186   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:41.617109   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:06:41.638159   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:06:41.661044   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:06:41.698709   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:41.722591   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:06:41.743216   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:41.763431   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:41.777864   21003 ssh_runner.go:195] Run: openssl version
	I0829 18:06:41.783206   21003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:41.793369   21003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.797576   21003 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.797635   21003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.803014   21003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:06:41.812720   21003 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:41.816257   21003 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:41.816304   21003 kubeadm.go:392] StartCluster: {Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:41.816395   21003 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:06:41.816453   21003 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:06:41.849244   21003 cri.go:89] found id: ""
	I0829 18:06:41.849319   21003 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:41.858563   21003 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:41.867292   21003 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:41.876016   21003 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:41.876037   21003 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:41.876080   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:41.884227   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:41.884280   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:41.892834   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:41.900929   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:41.900979   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:41.909576   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:41.917827   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:41.917879   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:41.926476   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:41.934804   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:41.934856   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:06:41.943606   21003 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:06:41.992646   21003 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:41.992776   21003 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:42.092351   21003 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:42.092518   21003 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:42.092669   21003 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:42.101559   21003 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:42.104509   21003 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:42.104621   21003 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:42.104687   21003 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:42.537741   21003 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:42.671932   21003 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:42.772862   21003 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:42.890551   21003 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:43.201812   21003 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:43.202000   21003 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-647117 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0829 18:06:43.375327   21003 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:43.375499   21003 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-647117 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0829 18:06:43.548880   21003 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:43.670158   21003 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:43.818859   21003 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:43.818919   21003 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:44.033791   21003 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:44.234114   21003 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:44.283551   21003 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:44.377485   21003 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:44.608153   21003 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:44.608910   21003 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:44.611448   21003 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:44.613436   21003 out.go:235]   - Booting up control plane ...
	I0829 18:06:44.613569   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:44.613680   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:44.613772   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:44.628134   21003 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:44.634006   21003 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:44.634068   21003 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:44.748283   21003 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:44.748472   21003 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:45.249786   21003 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995827ms
	I0829 18:06:45.249887   21003 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:50.747506   21003 kubeadm.go:310] [api-check] The API server is healthy after 5.501622111s
	I0829 18:06:50.761005   21003 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:50.778931   21003 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:50.804583   21003 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:50.804806   21003 kubeadm.go:310] [mark-control-plane] Marking the node addons-647117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:50.815965   21003 kubeadm.go:310] [bootstrap-token] Using token: wiq59h.4ta20vef60ifolag
	I0829 18:06:50.817393   21003 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:50.817515   21003 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:50.823008   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:50.829342   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:50.834828   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:50.837480   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:50.840740   21003 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:51.153540   21003 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:51.619414   21003 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:52.154068   21003 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:52.154113   21003 kubeadm.go:310] 
	I0829 18:06:52.154186   21003 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:52.154195   21003 kubeadm.go:310] 
	I0829 18:06:52.154271   21003 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:52.154279   21003 kubeadm.go:310] 
	I0829 18:06:52.154298   21003 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:52.154372   21003 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:52.154426   21003 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:52.154436   21003 kubeadm.go:310] 
	I0829 18:06:52.154498   21003 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:52.154509   21003 kubeadm.go:310] 
	I0829 18:06:52.154564   21003 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:52.154571   21003 kubeadm.go:310] 
	I0829 18:06:52.154643   21003 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:52.154739   21003 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:52.154828   21003 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:52.154837   21003 kubeadm.go:310] 
	I0829 18:06:52.154960   21003 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:52.155076   21003 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:52.155085   21003 kubeadm.go:310] 
	I0829 18:06:52.155192   21003 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wiq59h.4ta20vef60ifolag \
	I0829 18:06:52.155350   21003 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 18:06:52.155395   21003 kubeadm.go:310] 	--control-plane 
	I0829 18:06:52.155404   21003 kubeadm.go:310] 
	I0829 18:06:52.155507   21003 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:52.155517   21003 kubeadm.go:310] 
	I0829 18:06:52.155624   21003 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wiq59h.4ta20vef60ifolag \
	I0829 18:06:52.155743   21003 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 18:06:52.156619   21003 kubeadm.go:310] W0829 18:06:41.972258     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:52.156965   21003 kubeadm.go:310] W0829 18:06:41.973234     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:52.157113   21003 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:52.157145   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:52.157162   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:52.158997   21003 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:52.160298   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:52.169724   21003 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:52.191549   21003 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:52.191676   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.191714   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-647117 minikube.k8s.io/updated_at=2024_08_29T18_06_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-647117 minikube.k8s.io/primary=true
	I0829 18:06:52.209914   21003 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:52.324976   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.825811   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.325292   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.825112   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.325820   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.825675   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.325178   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.825703   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.324989   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.414413   21003 kubeadm.go:1113] duration metric: took 4.222809669s to wait for elevateKubeSystemPrivileges
	I0829 18:06:56.414449   21003 kubeadm.go:394] duration metric: took 14.598146711s to StartCluster
	I0829 18:06:56.414471   21003 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.414595   21003 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:06:56.415169   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.415361   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:56.415396   21003 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:56.415462   21003 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:56.415582   21003 addons.go:69] Setting yakd=true in profile "addons-647117"
	I0829 18:06:56.415605   21003 addons.go:69] Setting registry=true in profile "addons-647117"
	I0829 18:06:56.415609   21003 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-647117"
	I0829 18:06:56.415625   21003 addons.go:69] Setting helm-tiller=true in profile "addons-647117"
	I0829 18:06:56.415629   21003 addons.go:69] Setting volcano=true in profile "addons-647117"
	I0829 18:06:56.415588   21003 addons.go:69] Setting ingress=true in profile "addons-647117"
	I0829 18:06:56.415645   21003 addons.go:234] Setting addon registry=true in "addons-647117"
	I0829 18:06:56.415651   21003 addons.go:234] Setting addon helm-tiller=true in "addons-647117"
	I0829 18:06:56.415663   21003 addons.go:234] Setting addon volcano=true in "addons-647117"
	I0829 18:06:56.415667   21003 addons.go:69] Setting volumesnapshots=true in profile "addons-647117"
	I0829 18:06:56.415668   21003 addons.go:69] Setting storage-provisioner=true in profile "addons-647117"
	I0829 18:06:56.415681   21003 addons.go:234] Setting addon volumesnapshots=true in "addons-647117"
	I0829 18:06:56.415685   21003 addons.go:234] Setting addon storage-provisioner=true in "addons-647117"
	I0829 18:06:56.415691   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415696   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415702   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415706   21003 addons.go:69] Setting inspektor-gadget=true in profile "addons-647117"
	I0829 18:06:56.415708   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415724   21003 addons.go:234] Setting addon inspektor-gadget=true in "addons-647117"
	I0829 18:06:56.415751   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415641   21003 addons.go:234] Setting addon yakd=true in "addons-647117"
	I0829 18:06:56.415802   21003 addons.go:69] Setting ingress-dns=true in profile "addons-647117"
	I0829 18:06:56.415696   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415835   21003 addons.go:234] Setting addon ingress-dns=true in "addons-647117"
	I0829 18:06:56.415836   21003 addons.go:69] Setting metrics-server=true in profile "addons-647117"
	I0829 18:06:56.415856   21003 addons.go:234] Setting addon metrics-server=true in "addons-647117"
	I0829 18:06:56.415872   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415889   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416119   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.415659   21003 addons.go:234] Setting addon ingress=true in "addons-647117"
	I0829 18:06:56.416144   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416143   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416147   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416156   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416160   21003 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-647117"
	I0829 18:06:56.416176   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416181   21003 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-647117"
	I0829 18:06:56.415611   21003 addons.go:69] Setting default-storageclass=true in profile "addons-647117"
	I0829 18:06:56.416203   21003 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-647117"
	I0829 18:06:56.416210   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416228   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416233   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416146   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416284   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415822   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416327   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416344   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416347   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416361   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416433   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.415659   21003 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-647117"
	I0829 18:06:56.415615   21003 addons.go:69] Setting gcp-auth=true in profile "addons-647117"
	I0829 18:06:56.416493   21003 mustload.go:65] Loading cluster: addons-647117
	I0829 18:06:56.416505   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416536   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416457   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415599   21003 addons.go:69] Setting cloud-spanner=true in profile "addons-647117"
	I0829 18:06:56.416608   21003 addons.go:234] Setting addon cloud-spanner=true in "addons-647117"
	I0829 18:06:56.416650   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416663   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:56.416670   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416730   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416786   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416818   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416884   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416926   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:56.416653   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416993   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415606   21003 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-647117"
	I0829 18:06:56.417062   21003 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-647117"
	I0829 18:06:56.417124   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.417157   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.417190   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.417211   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.417237   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.417759   21003 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:56.431414   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:56.436670   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0829 18:06:56.437146   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.437246   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0829 18:06:56.437394   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0829 18:06:56.437610   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.437628   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.437687   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0829 18:06:56.437809   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.437950   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.438197   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.438211   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.438343   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.438359   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.438942   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.438986   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.442810   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.442949   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0829 18:06:56.442939   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.443564   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.443717   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.443773   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.444026   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.444479   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.444515   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.446472   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.446513   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.446968   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.447446   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.447153   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.447525   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.447738   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.447816   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.448300   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.448328   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.451235   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.451255   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.451627   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.452195   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.452230   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.452570   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0829 18:06:56.453048   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.453560   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.453579   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.453925   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.454471   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.454511   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.472672   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0829 18:06:56.473419   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478181   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0829 18:06:56.478196   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0829 18:06:56.478338   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0829 18:06:56.478756   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478771   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478855   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.479244   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.479270   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.479636   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.479717   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0829 18:06:56.479939   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.479951   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.480164   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.480179   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.480246   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.480250   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.480279   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.480366   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.480555   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.480617   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0829 18:06:56.480802   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.480928   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.480946   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481087   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.481111   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481293   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.481700   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.481719   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481740   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.481751   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.482059   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0829 18:06:56.482184   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.482473   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.482798   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.482822   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.482948   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.482978   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.483112   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.483588   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.483605   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.485285   21003 addons.go:234] Setting addon default-storageclass=true in "addons-647117"
	I0829 18:06:56.485323   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.485708   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.485742   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.485941   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0829 18:06:56.485968   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.486037   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0829 18:06:56.486453   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.486581   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.486798   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.486833   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.487055   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.487069   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.487187   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.487201   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.487491   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.487517   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.487987   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.488025   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.488059   21003 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:56.488507   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.488534   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.488746   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.489095   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.489117   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.490168   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.490301   21003 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:56.491450   21003 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:56.491467   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:56.491485   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.492948   21003 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-647117"
	I0829 18:06:56.492988   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.493330   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.493369   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.496719   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.497204   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.497226   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.498188   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0829 18:06:56.498268   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.498509   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.498603   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.498650   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.498793   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.499537   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.499570   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.499902   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.500440   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.500481   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.501294   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0829 18:06:56.502049   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.502504   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.502535   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.503107   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.503657   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.503701   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.507276   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0829 18:06:56.507768   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.508382   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.508406   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.508722   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.508861   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.510677   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.512639   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:56.513776   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:56.513797   21003 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:56.513817   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.515319   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0829 18:06:56.515800   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.516786   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.516805   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.516856   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.517214   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.517235   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.517370   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.517505   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.517553   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.517600   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.517708   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.518168   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.518208   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.532347   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0829 18:06:56.532894   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0829 18:06:56.533030   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.533414   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.533591   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.533603   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.534067   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.534409   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.534422   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.534514   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.534861   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.535226   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.535924   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0829 18:06:56.536353   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.536420   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0829 18:06:56.536755   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.536837   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0829 18:06:56.537295   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.537312   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.537384   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.537694   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.537869   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.538075   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.538716   21003 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:56.538773   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.538789   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.538859   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0829 18:06:56.539014   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.539114   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0829 18:06:56.539308   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.539327   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.539346   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.539533   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.539598   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.539646   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.540006   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.540014   21003 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:56.540022   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.540028   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:56.540045   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.540163   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.540232   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.540650   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.541057   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.541096   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.541262   21003 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:56.541638   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541311   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.540506   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.541936   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541939   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541995   21003 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:56.543193   21003 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.543211   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:56.543229   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.544013   21003 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:56.544028   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:56.544045   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.545403   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.545625   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.545907   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.546106   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.546226   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.546589   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.546667   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.546715   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.547188   21003 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:56.547565   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.548163   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.547666   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:56.548188   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:06:56.547970   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.548506   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:56.548516   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:06:56.548518   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:56.548537   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.548541   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:56.548548   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:56.548556   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:56.548563   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:06:56.548753   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.548823   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.548937   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.549134   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.549334   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:56.549403   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.549468   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0829 18:06:56.549564   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.549609   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.549623   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.549772   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.549834   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.549914   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.549974   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.550110   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.550260   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.550571   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:06:56.550571   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:56.550591   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	W0829 18:06:56.550660   21003 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:06:56.550690   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.550703   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.551269   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.551508   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.552601   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:56.552711   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.552948   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.553349   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.553376   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.553418   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.553567   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.553722   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.553833   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.554958   21003 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:56.554967   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:56.556064   21003 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:56.556082   21003 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:56.556101   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.556540   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0829 18:06:56.557101   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.557246   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:56.557716   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.557731   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.558069   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.558265   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.559622   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.559739   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:56.560081   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.560099   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.560311   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.560461   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.560522   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0829 18:06:56.560720   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I0829 18:06:56.560690   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.560989   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.561397   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0829 18:06:56.561537   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.561727   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:56.561802   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.561893   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.562018   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.562038   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.562455   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0829 18:06:56.562581   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.562586   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.562691   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.562761   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.563130   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.563148   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.563265   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.563283   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.563450   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.563577   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:56.563731   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.563743   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.563805   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.564012   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.564052   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42963
	I0829 18:06:56.564704   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.564786   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.564795   21003 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:56.565163   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.565201   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.565775   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.565872   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:56.565953   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:56.565966   21003 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:56.565982   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.565984   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.566000   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.566529   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.566553   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566600   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566876   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:56.566891   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:56.566913   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566921   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.567522   21003 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:56.568498   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.568666   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:56.568680   21003 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:56.568693   21003 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:56.568712   21003 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:56.568697   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.569831   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:56.569913   21003 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:56.569926   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:56.569945   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.570902   21003 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:56.571368   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571392   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571846   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.571869   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571947   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.571967   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.572003   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.572159   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.572233   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.572258   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.572364   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.572388   21003 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:56.572399   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:56.572413   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.572417   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.572536   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.572741   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.572872   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.573786   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.573963   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574278   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.574356   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574444   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.574528   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.574569   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574785   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.574857   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.575066   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.575072   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.575270   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.575284   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.575483   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.575644   21003 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:56.575656   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:56.575670   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.575415   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.577142   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0829 18:06:56.577490   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.577544   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.577856   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.577875   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.578165   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.578188   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.578358   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.578394   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.578517   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.578591   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.578730   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.578852   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.582225   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.582235   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.582242   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.582251   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.582262   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.582402   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.582415   21003 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:56.582424   21003 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:56.582439   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.582563   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.582717   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	W0829 18:06:56.583947   21003 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35106->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.583981   21003 retry.go:31] will retry after 265.336769ms: ssh: handshake failed: read tcp 192.168.39.1:35106->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.585697   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.586161   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.586192   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.586351   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.586491   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.586629   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.586736   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	W0829 18:06:56.607131   21003 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35120->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.607153   21003 retry.go:31] will retry after 305.774806ms: ssh: handshake failed: read tcp 192.168.39.1:35120->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.875799   21003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:56.875873   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:56.927872   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.928816   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:57.008376   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:57.008396   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:57.014179   21003 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:57.014203   21003 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:57.027140   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:57.027167   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:57.043157   21003 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:57.043177   21003 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:57.070356   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:57.099182   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:57.099201   21003 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:57.138825   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:57.138848   21003 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:57.151051   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:57.190016   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:57.190037   21003 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:57.210335   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:57.210355   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:57.221961   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:57.270521   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:57.270543   21003 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:57.315049   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:57.332317   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:57.332343   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:57.365240   21003 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:57.365263   21003 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:57.370347   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:57.370362   21003 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:57.413086   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:57.413118   21003 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:57.414407   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:57.414426   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:57.436369   21003 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.436388   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:57.485961   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:57.524473   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:57.562208   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.563959   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:57.571757   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:57.571776   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:57.587934   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:57.587954   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:57.667126   21003 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:57.667154   21003 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:57.696933   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:57.696960   21003 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:57.697118   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:57.697134   21003 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:57.826566   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:57.826587   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:57.883248   21003 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:57.883276   21003 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:57.928373   21003 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:57.928400   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:57.998581   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:57.998607   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:58.183428   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:58.183455   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:58.241042   21003 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:58.241068   21003 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:58.256257   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:58.316439   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:58.443343   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:58.443364   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:58.445449   21003 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:58.445468   21003 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:58.660398   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:58.660424   21003 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:58.662312   21003 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786403949s)
	I0829 18:06:58.662328   21003 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.786494537s)
	I0829 18:06:58.662342   21003 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:58.663018   21003 node_ready.go:35] waiting up to 6m0s for node "addons-647117" to be "Ready" ...
	I0829 18:06:58.666067   21003 node_ready.go:49] node "addons-647117" has status "Ready":"True"
	I0829 18:06:58.666084   21003 node_ready.go:38] duration metric: took 3.048985ms for node "addons-647117" to be "Ready" ...
	I0829 18:06:58.666106   21003 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:58.676217   21003 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace to be "Ready" ...
	I0829 18:06:58.801455   21003 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:58.801477   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:58.995484   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:59.015898   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:59.015928   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:59.185715   21003 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-647117" context rescaled to 1 replicas
	I0829 18:06:59.282748   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:59.282771   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:59.559451   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:59.559475   21003 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:59.736185   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:00.724928   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:01.060208   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.13229736s)
	I0829 18:07:01.060262   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.131426124s)
	I0829 18:07:01.060266   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060279   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060285   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060293   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060306   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.989913885s)
	I0829 18:07:01.060348   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060367   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060369   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.838385594s)
	I0829 18:07:01.060384   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060397   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060352   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.909277018s)
	I0829 18:07:01.060452   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060461   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060780   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060786   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.060796   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.060800   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060805   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060813   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060816   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060836   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.060843   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.060850   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060857   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060978   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061004   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061014   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061023   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061246   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061254   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061263   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061270   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061525   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.061547   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061554   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061561   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061577   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061791   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.061812   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061818   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.062559   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062587   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062611   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.062618   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.062830   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062864   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.062872   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.063136   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.063173   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.063180   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.063261   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.063273   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.238880   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.238905   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.239324   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.239339   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.239337   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.571208   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.256119707s)
	I0829 18:07:01.571266   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.571285   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.571510   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.571527   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.571536   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.571543   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.571811   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.571832   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.571841   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.681468   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.681491   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.681800   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.681893   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.681905   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979228   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.49321647s)
	I0829 18:07:01.979257   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.454750161s)
	I0829 18:07:01.979274   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979291   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979292   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979305   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979329   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.417089396s)
	I0829 18:07:01.979375   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979389   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979660   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.979674   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979683   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979691   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979700   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.979728   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.979734   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.979747   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979761   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979769   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.980006   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.980037   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980048   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980050   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.980086   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980094   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980103   21003 addons.go:475] Verifying addon registry=true in "addons-647117"
	I0829 18:07:01.980373   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980385   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980394   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.980402   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.981457   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.981470   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.981480   21003 addons.go:475] Verifying addon metrics-server=true in "addons-647117"
	I0829 18:07:01.982538   21003 out.go:177] * Verifying registry addon...
	I0829 18:07:01.984946   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:07:02.031640   21003 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:07:02.031663   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:02.525184   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.000875   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.183701   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:03.491799   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.593792   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:07:03.593832   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:07:03.597360   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.597814   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:07:03.597845   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.598025   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:07:03.598268   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:07:03.598470   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:07:03.598664   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
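The sshutil.go:53 line above records the SSH client minikube opens to the VM (192.168.39.43, port 22, the machine's id_rsa key) before copying the GCP credential files. A rough sketch of opening such a client with golang.org/x/crypto/ssh follows; it is an approximation, and skipping host-key verification is a simplification made for brevity in the example, not a statement about minikube's behaviour:

	// Sketch: open an SSH client to the minikube VM using the machine's private key.
	// Illustrative only; host-key checking is deliberately skipped here.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func newSSHClient(ip, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification for the sketch
		}
		return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
	}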
	I0829 18:07:03.833461   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:07:03.876546   21003 addons.go:234] Setting addon gcp-auth=true in "addons-647117"
	I0829 18:07:03.876598   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:07:03.876890   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:07:03.876915   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:07:03.892569   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0829 18:07:03.893039   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:07:03.893483   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:07:03.893502   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:07:03.893860   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:07:03.894349   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:07:03.894372   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:07:03.908630   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0829 18:07:03.909028   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:07:03.909510   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:07:03.909530   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:07:03.909878   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:07:03.910100   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:07:03.911780   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:07:03.912019   21003 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:07:03.912041   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:07:03.914511   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.914935   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:07:03.914960   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.915116   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:07:03.915301   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:07:03.915464   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:07:03.915620   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:07:04.022481   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.501297   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.735718   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.17172825s)
	I0829 18:07:04.735757   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.735766   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.735865   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.479566427s)
	W0829 18:07:04.735914   21003 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:04.735926   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.419451964s)
	I0829 18:07:04.735958   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.735981   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.735976   21003 retry.go:31] will retry after 229.112003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
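The failure above is an ordering problem: the VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass are applied in one kubectl invocation, so the custom resource is rejected ("no matches for kind ... ensure CRDs are installed first") until the CRDs are established, and retry.go schedules another attempt (the forced re-apply at 18:07:04–18:07:07 then completes without a further retry). A minimal sketch of such a retry wrapper is shown below; the helper name, attempt count, and fixed delay are illustrative assumptions rather than minikube's actual retry.go logic:

	// Sketch: retry a kubectl apply a few times with a fixed delay, mirroring the
	// "will retry after ..." behaviour the log shows. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(kubectl string, attempts int, delay time.Duration, args ...string) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			time.Sleep(delay)
		}
		return lastErr
	}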
	I0829 18:07:04.736053   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736066   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736077   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736085   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736150   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.740634409s)
	I0829 18:07:04.736182   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736194   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736197   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736211   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736215   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736300   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736221   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736347   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736362   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736373   21003 addons.go:475] Verifying addon ingress=true in "addons-647117"
	I0829 18:07:04.736675   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736697   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736704   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736712   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736800   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736819   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736832   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736840   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.737121   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.737148   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.737155   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.739047   21003 out.go:177] * Verifying ingress addon...
	I0829 18:07:04.739055   21003 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-647117 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:07:04.741307   21003 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:07:04.745091   21003 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:07:04.745106   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.965918   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:04.987862   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.250313   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.502670   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.726015   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:05.763615   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.799116   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.062879943s)
	I0829 18:07:05.799136   21003 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.88709264s)
	I0829 18:07:05.799162   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:05.799177   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:05.799451   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:05.799474   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:05.799484   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:05.799493   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:05.799497   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:05.799758   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:05.799780   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:05.799790   21003 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-647117"
	I0829 18:07:05.799799   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:05.800504   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:07:05.801286   21003 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:07:05.802603   21003 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:07:05.803538   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:07:05.803551   21003 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:07:05.803578   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:07:05.837611   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:07:05.837635   21003 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:07:05.856926   21003 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:07:05.856951   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.886792   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:05.886814   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:07:05.934598   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:06.250813   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.251110   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.348403   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.488440   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.745795   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.807735   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.996848   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.105783   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.139806103s)
	I0829 18:07:07.105829   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.105845   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.106137   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:07.107594   21003 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0829 18:07:07.107610   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.107623   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.107632   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.107958   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.107976   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.212977   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.278337274s)
	I0829 18:07:07.213038   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.213058   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.213352   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.213372   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.213383   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.213390   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.213624   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:07.213654   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.213671   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.215310   21003 addons.go:475] Verifying addon gcp-auth=true in "addons-647117"
	I0829 18:07:07.217287   21003 out.go:177] * Verifying gcp-auth addon...
	I0829 18:07:07.219398   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:07:07.246816   21003 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:07:07.246836   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.309709   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.311474   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.490556   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.723447   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.746060   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.808691   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.989564   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.182573   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:08.222445   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.245717   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.308826   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.489048   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.723297   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.745592   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.808123   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.989930   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.185160   21003 pod_ready.go:98] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.43 HostIPs:[{IP:192.168.39.43}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:07:01 +0000 UTC,FinishedAt:2024-08-29 18:07:06 +0000 UTC,ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485 Started:0xc0027c21b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002646e00} {Name:kube-api-access-fc2r9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002646e10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:09.185196   21003 pod_ready.go:82] duration metric: took 10.508944074s for pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace to be "Ready" ...
	E0829 18:07:09.185208   21003 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.43 HostIPs:[{IP:192.168.39.43}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:07:01 +0000 UTC,FinishedAt:2024-08-29 18:07:06 +0000 UTC,ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485 Started:0xc0027c21b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002646e00} {Name:kube-api-access-fc2r9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002646e10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:09.185217   21003 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.192464   21003 pod_ready.go:93] pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.192485   21003 pod_ready.go:82] duration metric: took 7.259302ms for pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.192494   21003 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.198684   21003 pod_ready.go:93] pod "etcd-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.198704   21003 pod_ready.go:82] duration metric: took 6.204777ms for pod "etcd-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.198713   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.203256   21003 pod_ready.go:93] pod "kube-apiserver-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.203273   21003 pod_ready.go:82] duration metric: took 4.55494ms for pod "kube-apiserver-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.203282   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.207437   21003 pod_ready.go:93] pod "kube-controller-manager-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.207455   21003 pod_ready.go:82] duration metric: took 4.167044ms for pod "kube-controller-manager-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.207464   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dptz4" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.223722   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.326499   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.326509   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.489972   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.580220   21003 pod_ready.go:93] pod "kube-proxy-dptz4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.580245   21003 pod_ready.go:82] duration metric: took 372.774467ms for pod "kube-proxy-dptz4" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.580257   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.726036   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.745103   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.808109   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.980305   21003 pod_ready.go:93] pod "kube-scheduler-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.980340   21003 pod_ready.go:82] duration metric: took 400.073461ms for pod "kube-scheduler-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.980352   21003 pod_ready.go:39] duration metric: took 11.314232535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:09.980374   21003 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:09.980445   21003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:09.988253   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.029423   21003 api_server.go:72] duration metric: took 13.613993413s to wait for apiserver process to appear ...
	I0829 18:07:10.029447   21003 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:10.029482   21003 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0829 18:07:10.033725   21003 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0829 18:07:10.034999   21003 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:10.035018   21003 api_server.go:131] duration metric: took 5.56499ms to wait for apiserver health ...
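The healthz probe above is a plain HTTPS GET against the apiserver at 192.168.39.43:8443 that expects a 200 response with body "ok". The sketch below reproduces that check; skipping TLS verification is a shortcut taken for brevity in the example (minikube itself authenticates the cluster's certificates), and the function name is invented for illustration:

	// Sketch: probe the apiserver /healthz endpoint and expect HTTP 200 / "ok".
	// TLS verification is skipped here only to keep the example short.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("healthz: %s\n", body) // expected body: "ok"
		return nil
	}

A call such as checkHealthz("https://192.168.39.43:8443/healthz") corresponds to the request logged above.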
	I0829 18:07:10.035026   21003 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:10.188946   21003 system_pods.go:59] 18 kube-system pods found
	I0829 18:07:10.188982   21003 system_pods.go:61] "coredns-6f6b679f8f-nhhtz" [bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2] Running
	I0829 18:07:10.188990   21003 system_pods.go:61] "csi-hostpath-attacher-0" [442c8a1e-b851-4b2f-a39a-da8738074897] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.188996   21003 system_pods.go:61] "csi-hostpath-resizer-0" [fb7dfca7-b2eb-492b-934b-81a33c34709a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.189004   21003 system_pods.go:61] "csi-hostpathplugin-b2xkq" [e62b7174-47eb-4ff8-a1db-76f9936a924d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.189009   21003 system_pods.go:61] "etcd-addons-647117" [9f96c1c2-351b-4af4-9c9c-89ed5623670f] Running
	I0829 18:07:10.189013   21003 system_pods.go:61] "kube-apiserver-addons-647117" [035080d0-8ea6-4d22-9861-28b1129fdabb] Running
	I0829 18:07:10.189017   21003 system_pods.go:61] "kube-controller-manager-addons-647117" [937119a3-ad43-498c-8a11-10919cd3cf8c] Running
	I0829 18:07:10.189024   21003 system_pods.go:61] "kube-ingress-dns-minikube" [a9a425c2-2fd3-4e62-be25-f26a8f87ddd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.189030   21003 system_pods.go:61] "kube-proxy-dptz4" [9a386c43-bd19-4ba5-a2be-6c0019adeedd] Running
	I0829 18:07:10.189035   21003 system_pods.go:61] "kube-scheduler-addons-647117" [159e6309-ac85-43f4-9c40-f6bf4ccb7035] Running
	I0829 18:07:10.189042   21003 system_pods.go:61] "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.189050   21003 system_pods.go:61] "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.189060   21003 system_pods.go:61] "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.189068   21003 system_pods.go:61] "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.189079   21003 system_pods.go:61] "snapshot-controller-56fcc65765-kgrh6" [1f305fc4-1a8a-47d0-bb41-7c8f77b1459c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.189085   21003 system_pods.go:61] "snapshot-controller-56fcc65765-kpgzh" [62b317a2-39aa-4da5-a04b-a97a0c67f06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.189090   21003 system_pods.go:61] "storage-provisioner" [abb10014-4a67-4ddf-ba6b-89598283be68] Running
	I0829 18:07:10.189099   21003 system_pods.go:61] "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:07:10.189105   21003 system_pods.go:74] duration metric: took 154.074157ms to wait for pod list to return data ...
	I0829 18:07:10.189116   21003 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:07:10.222838   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.247273   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.309243   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.380898   21003 default_sa.go:45] found service account: "default"
	I0829 18:07:10.380924   21003 default_sa.go:55] duration metric: took 191.802984ms for default service account to be created ...
	I0829 18:07:10.380932   21003 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:07:10.488590   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.584828   21003 system_pods.go:86] 18 kube-system pods found
	I0829 18:07:10.584854   21003 system_pods.go:89] "coredns-6f6b679f8f-nhhtz" [bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2] Running
	I0829 18:07:10.584864   21003 system_pods.go:89] "csi-hostpath-attacher-0" [442c8a1e-b851-4b2f-a39a-da8738074897] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.584871   21003 system_pods.go:89] "csi-hostpath-resizer-0" [fb7dfca7-b2eb-492b-934b-81a33c34709a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.584878   21003 system_pods.go:89] "csi-hostpathplugin-b2xkq" [e62b7174-47eb-4ff8-a1db-76f9936a924d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.584883   21003 system_pods.go:89] "etcd-addons-647117" [9f96c1c2-351b-4af4-9c9c-89ed5623670f] Running
	I0829 18:07:10.584888   21003 system_pods.go:89] "kube-apiserver-addons-647117" [035080d0-8ea6-4d22-9861-28b1129fdabb] Running
	I0829 18:07:10.584893   21003 system_pods.go:89] "kube-controller-manager-addons-647117" [937119a3-ad43-498c-8a11-10919cd3cf8c] Running
	I0829 18:07:10.584902   21003 system_pods.go:89] "kube-ingress-dns-minikube" [a9a425c2-2fd3-4e62-be25-f26a8f87ddd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.584907   21003 system_pods.go:89] "kube-proxy-dptz4" [9a386c43-bd19-4ba5-a2be-6c0019adeedd] Running
	I0829 18:07:10.584913   21003 system_pods.go:89] "kube-scheduler-addons-647117" [159e6309-ac85-43f4-9c40-f6bf4ccb7035] Running
	I0829 18:07:10.584924   21003 system_pods.go:89] "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.584935   21003 system_pods.go:89] "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.584945   21003 system_pods.go:89] "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.584950   21003 system_pods.go:89] "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.584955   21003 system_pods.go:89] "snapshot-controller-56fcc65765-kgrh6" [1f305fc4-1a8a-47d0-bb41-7c8f77b1459c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.584965   21003 system_pods.go:89] "snapshot-controller-56fcc65765-kpgzh" [62b317a2-39aa-4da5-a04b-a97a0c67f06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.584969   21003 system_pods.go:89] "storage-provisioner" [abb10014-4a67-4ddf-ba6b-89598283be68] Running
	I0829 18:07:10.584975   21003 system_pods.go:89] "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:07:10.584984   21003 system_pods.go:126] duration metric: took 204.046778ms to wait for k8s-apps to be running ...
	I0829 18:07:10.584994   21003 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:07:10.585045   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:07:10.626258   21003 system_svc.go:56] duration metric: took 41.254313ms WaitForService to wait for kubelet
	I0829 18:07:10.626292   21003 kubeadm.go:582] duration metric: took 14.210866708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:07:10.626318   21003 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:07:10.723351   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.745625   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.780607   21003 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:07:10.780633   21003 node_conditions.go:123] node cpu capacity is 2
	I0829 18:07:10.780645   21003 node_conditions.go:105] duration metric: took 154.321354ms to run NodePressure ...
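The NodePressure step reads each node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs) and confirms no pressure conditions are set. A client-go sketch of an equivalent check follows; it is illustrative only and not the node_conditions.go implementation:

	// Sketch: read node capacity (cpu, ephemeral-storage) and flag memory/disk pressure.
	// Illustrative only.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func checkNodes(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
		return nil
	}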
	I0829 18:07:10.780656   21003 start.go:241] waiting for startup goroutines ...
	I0829 18:07:10.808661   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.432004   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.432056   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.432507   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.432753   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.531343   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.722334   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.746103   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.808992   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.988778   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.224840   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:12.245531   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.307880   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.488647   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.723996   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:12.745184   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.808714   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.988428   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.223147   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:13.245839   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.308973   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.875413   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.875496   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:13.875555   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.875916   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.988310   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.223406   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:14.246021   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.308758   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.489231   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.723115   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:14.750809   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.848451   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.989629   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.223214   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:15.245568   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.307971   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.488573   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.724020   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:15.747296   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.808899   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.989134   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.223214   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:16.245841   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.308609   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.489231   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.722831   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:16.745495   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.807750   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.988112   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.223152   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:17.245700   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.308534   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.490053   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.722271   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:17.745672   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.808093   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.989536   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.223076   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:18.245676   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.308003   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.488710   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.724041   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:18.745187   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.808284   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.988906   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.222566   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:19.246507   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.307703   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.488524   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.723848   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:19.744936   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.807986   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.989362   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.223136   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:20.245701   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.308166   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.488793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.722701   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:20.744935   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.807920   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.989378   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.223255   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:21.245626   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.307716   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.488497   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.722746   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:21.744978   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.808369   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.989361   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.223301   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:22.245645   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.307754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.488146   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.724753   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:22.745129   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.817804   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.989553   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.223526   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:23.245605   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.308356   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.488772   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.723300   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:23.745589   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.807597   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.988552   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.223387   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:24.245787   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.308121   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.489472   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.723639   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:24.744866   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.814322   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.989050   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.223626   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:25.244872   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.308113   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.489018   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.723187   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:25.745594   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.808380   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.990284   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.223467   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:26.246478   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.311430   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.489100   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.723298   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:26.745982   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.808347   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.989395   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.223619   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:27.244802   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.308288   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.488267   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.723514   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:27.745730   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.807863   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.989687   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.223318   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:28.245983   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.308333   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.488782   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.722485   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:28.745638   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.808513   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.991921   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.222789   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:29.245435   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.308533   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.488400   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.723378   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:29.745288   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.807764   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.989287   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.223850   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:30.245679   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.307898   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.488344   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.723583   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:30.745909   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.808358   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.989347   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.223420   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:31.245676   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.308106   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.489548   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.723984   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:31.752426   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.808206   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.988904   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.222648   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:32.245333   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.307744   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.488573   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.724105   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:32.825629   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.825917   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.989527   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.223029   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:33.245355   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.308032   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.490376   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.722861   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:33.745432   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.808944   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.992715   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.223303   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:34.245804   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.308469   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.489113   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.722859   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:34.745014   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.809535   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.990897   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.223016   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:35.245393   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.307861   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.489500   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.724153   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:35.745295   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.808675   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.992470   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.224494   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:36.245850   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.308073   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.488905   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.723280   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:36.745428   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.807550   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.989313   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.223233   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:37.246873   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.309007   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.489533   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.723538   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:37.745569   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.809432   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.989055   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.223047   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:38.245660   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.308142   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.488344   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.723366   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:38.745351   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.808393   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.988503   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.223854   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:39.245533   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.307984   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.488928   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.722252   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:39.746300   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.808576   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.989080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.223015   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:40.245885   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.324651   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.489080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.722990   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:40.745516   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.808575   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.988689   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.223013   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:41.245430   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.308188   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.489125   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.723598   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:41.744926   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.808306   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.989614   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:42.224132   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:42.245427   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.307702   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.489328   21003 kapi.go:107] duration metric: took 40.504379034s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:42.723558   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:42.745851   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.808681   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.497177   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:43.497724   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.497761   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.722981   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:43.745692   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.807475   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.222828   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:44.245874   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.325234   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.723309   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:44.745739   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.807721   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.223946   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:45.245318   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.309088   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.723267   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:45.745838   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.808262   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.223279   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:46.245972   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.308455   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.722988   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:46.745976   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.808159   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.223759   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:47.245074   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.308591   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.723579   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:47.746171   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.808847   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.223841   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:48.245152   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.309348   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.722985   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:48.745588   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.808431   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.223107   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:49.245680   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.308240   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.723337   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:49.745413   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.807755   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.223677   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:50.245190   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.308677   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.723917   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:50.745139   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.808544   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.223080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:51.245425   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.308106   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.723688   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:51.746081   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.808225   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.223806   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:52.326377   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.327351   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.725059   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:52.826530   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.826759   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.228476   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:53.245760   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.309747   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.722617   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:53.746004   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.808430   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.517283   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:54.517839   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.518018   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.723061   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:54.746186   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.811981   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.222608   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:55.246316   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.308886   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.722235   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:55.745334   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.019434   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.223858   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:56.245409   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.307995   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.722626   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:56.745974   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.808140   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.223268   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:57.256102   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.308364   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.726325   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:57.745974   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.808877   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.223559   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:58.246847   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.312157   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.727333   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:58.746318   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.808148   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.222345   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:59.245913   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.307531   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.722489   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:59.745604   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.807676   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.271245   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:00.272539   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.308316   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.723754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:00.745187   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.807594   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.223141   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:01.245994   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.308389   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.723190   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:01.745545   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.807926   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.570569   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:02.571356   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.571633   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.724397   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:02.747272   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.826148   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.223815   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:03.246608   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.307864   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.726393   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:03.828835   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.828904   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.223011   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.245511   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.308195   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.723188   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.745550   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.807502   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.223443   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.246051   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.308712   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.723117   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.745574   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.808834   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.226761   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.245664   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.307618   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.725180   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.748981   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.808801   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.226928   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.245835   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.308980   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.722723   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.745324   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.807345   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.223879   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.325379   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.325434   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.725790   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.744949   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.826386   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.223279   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:09.246040   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.308012   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.723363   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:09.746259   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.809000   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.222946   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.252397   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.326511   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.726046   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.746259   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.809839   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.223348   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.246062   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.309338   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.728846   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.749115   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.809623   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.225216   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.246889   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.308657   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.724225   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.746449   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.809246   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.224804   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.247079   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.325658   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.723793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.745266   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.807779   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.222598   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.244733   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.308124   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.728165   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.746139   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.808642   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:15.223457   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.246721   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.308556   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:15.933232   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.936608   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.936821   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:16.223056   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.245394   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.307894   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:16.722613   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.745393   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.808036   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:17.224002   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.245283   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.327819   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:17.725793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.744806   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.808170   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:18.227738   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.245282   21003 kapi.go:107] duration metric: took 1m13.503976561s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:08:18.329111   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:18.787939   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.807754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:19.222198   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.308444   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:19.723855   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.808045   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:20.222926   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.307854   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:20.723764   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.826135   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:21.222994   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.307673   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:21.722977   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.807653   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:22.432663   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.432991   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:22.723932   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.825185   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:23.226536   21003 kapi.go:107] duration metric: took 1m16.007133625s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:08:23.228553   21003 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-647117 cluster.
	I0829 18:08:23.229841   21003 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:08:23.231235   21003 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:08:23.309308   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:23.809205   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:24.309098   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:24.808683   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:25.307456   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:25.810519   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:26.308581   21003 kapi.go:107] duration metric: took 1m20.505001944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:26.310411   21003 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0829 18:08:26.311643   21003 addons.go:510] duration metric: took 1m29.89618082s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns default-storageclass storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0829 18:08:26.311695   21003 start.go:246] waiting for cluster config update ...
	I0829 18:08:26.311717   21003 start.go:255] writing updated cluster config ...
	I0829 18:08:26.311981   21003 ssh_runner.go:195] Run: rm -f paused
	I0829 18:08:26.363273   21003 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:08:26.365265   21003 out.go:177] * Done! kubectl is now configured to use "addons-647117" cluster and "default" namespace by default
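	The gcp-auth output above notes that a pod can opt out of the credential mount by carrying a label with the gcp-auth-skip-secret key. A minimal sketch of such a pod manifest follows; only the label key comes from the log, while the label value "true", the pod name, and the image are assumptions made for illustration:

	# Hypothetical pod spec illustrating the gcp-auth-skip-secret label referenced above.
	# Only the label key is taken from the log output; the value "true" is assumed.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: skip-gcp-auth-demo            # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"      # assumed value; the report only names the key
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox   # image that appears elsewhere in this report
	    command: ["sleep", "3600"]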
	
	
	==> CRI-O <==
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.515174022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955613515137981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5f31411-b4a2-4e8e-95e7-5334ebdf2908 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.516019071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ac26c3e-73e3-466c-b52d-429478981b09 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.516108517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ac26c3e-73e3-466c-b52d-429478981b09 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.516780451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e,PodSandboxId:73676cda05f2367b40f3a0c294fe814922e46484ee10d76931d7f9e16c3e8db0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1724954880515432885,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tg7nb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52315a01-2d0e-4db7-9560-48dc7a163f0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0,PodSandboxId:6ed81ce9469c299ccbbeea991ac553c96eaff7aefcfaa0ab6f012f5bb2f8a005,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880370356967,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qkkdh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af486128-f893-40e3-99de-17a3336cfaeb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2
4c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f1
4c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e9525
0c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee0311864
75db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&Contain
erMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ac26c3e-73e3-466c-b52d-429478981b09 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.557111159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ee8de7b-92ae-4f0e-9d96-85733c88d888 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.557199024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ee8de7b-92ae-4f0e-9d96-85733c88d888 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.558472434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f988194-8a00-465f-9003-9cf03c992b2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.559608748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955613559580809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f988194-8a00-465f-9003-9cf03c992b2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.560039142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e2b89f7-51ec-4e3a-93c9-e70f61f50f58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.560105427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e2b89f7-51ec-4e3a-93c9-e70f61f50f58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.560542889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e,PodSandboxId:73676cda05f2367b40f3a0c294fe814922e46484ee10d76931d7f9e16c3e8db0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1724954880515432885,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tg7nb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52315a01-2d0e-4db7-9560-48dc7a163f0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0,PodSandboxId:6ed81ce9469c299ccbbeea991ac553c96eaff7aefcfaa0ab6f012f5bb2f8a005,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880370356967,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qkkdh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af486128-f893-40e3-99de-17a3336cfaeb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2
4c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f1
4c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e9525
0c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee0311864
75db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&Contain
erMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e2b89f7-51ec-4e3a-93c9-e70f61f50f58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.592062810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb989b4b-7b4b-42c3-aa38-495fd28c9331 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.592154935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb989b4b-7b4b-42c3-aa38-495fd28c9331 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.593277046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=153f6e64-9704-4cf8-958f-f7feecfd861c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.594449068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955613594422008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=153f6e64-9704-4cf8-958f-f7feecfd861c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.594966992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b58eb637-26f0-4e2b-905f-fc766383908e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.595021119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b58eb637-26f0-4e2b-905f-fc766383908e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.595397652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e,PodSandboxId:73676cda05f2367b40f3a0c294fe814922e46484ee10d76931d7f9e16c3e8db0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1724954880515432885,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tg7nb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52315a01-2d0e-4db7-9560-48dc7a163f0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0,PodSandboxId:6ed81ce9469c299ccbbeea991ac553c96eaff7aefcfaa0ab6f012f5bb2f8a005,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880370356967,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qkkdh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af486128-f893-40e3-99de-17a3336cfaeb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2
4c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f1
4c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e9525
0c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee0311864
75db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&Contain
erMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b58eb637-26f0-4e2b-905f-fc766383908e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.639688705Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff949aa4-f251-4c43-b75e-1b587a032abe name=/runtime.v1.RuntimeService/Version
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.639766673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff949aa4-f251-4c43-b75e-1b587a032abe name=/runtime.v1.RuntimeService/Version
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.641438258Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b0a02fa-65ce-4e12-b303-dd20bc4c7e8a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.642601375Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955613642573740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b0a02fa-65ce-4e12-b303-dd20bc4c7e8a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.643173828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6929879c-aae0-46ba-98c3-5666106372ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.643245758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6929879c-aae0-46ba-98c3-5666106372ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:20:13 addons-647117 crio[663]: time="2024-08-29 18:20:13.643677865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e,PodSandboxId:73676cda05f2367b40f3a0c294fe814922e46484ee10d76931d7f9e16c3e8db0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINE
R_EXITED,CreatedAt:1724954880515432885,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tg7nb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52315a01-2d0e-4db7-9560-48dc7a163f0b,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0,PodSandboxId:6ed81ce9469c299ccbbeea991ac553c96eaff7aefcfaa0ab6f012f5bb2f8a005,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac
90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724954880370356967,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-qkkdh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: af486128-f893-40e3-99de-17a3336cfaeb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a2
4c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbd
a1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f1
4c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e9525
0c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee0311864
75db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&Contain
erMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6929879c-aae0-46ba-98c3-5666106372ae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6bc6a10e7654       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   eed98502443b7       hello-world-app-55bf9c44b4-q67c7
	dcc896fe7c39a       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   6576b025b47bc       nginx
	c4f5014c540fc       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   9876705b70ba7       headlamp-57fb76fcdb-jmjhc
	a814d0a183682       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   56c18ca1bdb71       gcp-auth-89d5ffd79-j924p
	4f61716197768       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   73676cda05f23       ingress-nginx-admission-patch-tg7nb
	62f40717dc5b9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   6ed81ce9469c2       ingress-nginx-admission-create-qkkdh
	0b634523ff8d1       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        12 minutes ago      Running             metrics-server            0                   55d4a995519c0       metrics-server-8988944d9-9pvr6
	c7d6293cd5ae5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             13 minutes ago      Running             storage-provisioner       0                   d2641f267147c       storage-provisioner
	43c5285b49b2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             13 minutes ago      Running             coredns                   0                   29673979fe79f       coredns-6f6b679f8f-nhhtz
	20d8d4b2a5b99       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             13 minutes ago      Running             kube-proxy                0                   ca373cf48871d       kube-proxy-dptz4
	7109054cd9285       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             13 minutes ago      Running             kube-controller-manager   0                   f1139b5439166       kube-controller-manager-addons-647117
	3bbe72bf43966       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             13 minutes ago      Running             kube-scheduler            0                   63b0cbde37a9d       kube-scheduler-addons-647117
	e4037213915cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             13 minutes ago      Running             kube-apiserver            0                   2b4c41aeae940       kube-apiserver-addons-647117
	ad53629527269       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   905af1fd51ac9       etcd-addons-647117
	
	
	==> coredns [43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c] <==
	[INFO] 127.0.0.1:40023 - 21501 "HINFO IN 2107751163851146271.7937220302157701423. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011076414s
	[INFO] 10.244.0.7:57388 - 3898 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00041242s
	[INFO] 10.244.0.7:57388 - 35385 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160164s
	[INFO] 10.244.0.7:42181 - 16646 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102891s
	[INFO] 10.244.0.7:42181 - 61211 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000143215s
	[INFO] 10.244.0.7:40451 - 5822 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096496s
	[INFO] 10.244.0.7:40451 - 10428 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151048s
	[INFO] 10.244.0.7:50345 - 34777 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108547s
	[INFO] 10.244.0.7:50345 - 62175 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123168s
	[INFO] 10.244.0.7:43363 - 59112 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011089s
	[INFO] 10.244.0.7:43363 - 38637 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084266s
	[INFO] 10.244.0.7:43570 - 27914 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066159s
	[INFO] 10.244.0.7:43570 - 8968 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006745s
	[INFO] 10.244.0.7:51342 - 48058 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034576s
	[INFO] 10.244.0.7:51342 - 50108 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080216s
	[INFO] 10.244.0.7:55526 - 58103 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080655s
	[INFO] 10.244.0.7:55526 - 43765 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000491s
	[INFO] 10.244.0.22:59665 - 61483 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046118s
	[INFO] 10.244.0.22:56522 - 61414 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001110678s
	[INFO] 10.244.0.22:56188 - 1457 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155671s
	[INFO] 10.244.0.22:42917 - 2402 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000399062s
	[INFO] 10.244.0.22:48780 - 50292 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158469s
	[INFO] 10.244.0.22:43403 - 21131 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069692s
	[INFO] 10.244.0.22:59530 - 50990 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001145169s
	[INFO] 10.244.0.22:57789 - 7865 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001496446s
	
	
	==> describe nodes <==
	Name:               addons-647117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-647117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-647117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-647117
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-647117
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:20:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:17:55 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:17:55 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:17:55 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:17:55 +0000   Thu, 29 Aug 2024 18:06:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    addons-647117
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb2784d9f1e146b3adcb56f05f7d626c
	  System UUID:                eb2784d9-f1e1-46b3-adcb-56f05f7d626c
	  Boot ID:                    e13d5250-07a7-415d-bb34-b77c87eefe5b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-q67c7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-j924p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  headlamp                    headlamp-57fb76fcdb-jmjhc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-6f6b679f8f-nhhtz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-647117                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-647117             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-647117    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dptz4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-647117             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-8988944d9-9pvr6           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-647117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-647117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-647117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node addons-647117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node addons-647117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node addons-647117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node addons-647117 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-647117 event: Registered Node addons-647117 in Controller
	
	
	==> dmesg <==
	[ +14.496686] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.231458] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:08] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.119850] kauditd_printk_skb: 65 callbacks suppressed
	[  +9.791316] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.274613] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.166700] kauditd_printk_skb: 51 callbacks suppressed
	[Aug29 18:09] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:11] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:13] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:16] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.960026] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.856149] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.076798] kauditd_printk_skb: 17 callbacks suppressed
	[Aug29 18:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.882088] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.437607] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.553101] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.346334] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.833680] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.005059] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.337864] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.949388] kauditd_printk_skb: 11 callbacks suppressed
	[Aug29 18:20] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.264042] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7] <==
	{"level":"warn","ts":"2024-08-29T18:08:15.916414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.890727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T18:08:15.916450Z","caller":"traceutil/trace.go:171","msg":"trace[383738334] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1140; }","duration":"365.944011ms","start":"2024-08-29T18:08:15.550499Z","end":"2024-08-29T18:08:15.916443Z","steps":["trace[383738334] 'agreement among raft nodes before linearized reading'  (duration: 365.865295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.916483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:08:15.550459Z","time spent":"366.016618ms","remote":"127.0.0.1:37584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":30,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"info","ts":"2024-08-29T18:08:15.915571Z","caller":"traceutil/trace.go:171","msg":"trace[1194422704] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"365.049318ms","start":"2024-08-29T18:08:15.550504Z","end":"2024-08-29T18:08:15.915554Z","steps":["trace[1194422704] 'read index received'  (duration: 364.868874ms)","trace[1194422704] 'applied index is now lower than readState.Index'  (duration: 180.004µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:08:15.916898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.484708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.916956Z","caller":"traceutil/trace.go:171","msg":"trace[83720747] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"207.515173ms","start":"2024-08-29T18:08:15.709401Z","end":"2024-08-29T18:08:15.916916Z","steps":["trace[83720747] 'agreement among raft nodes before linearized reading'  (duration: 207.462966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.917503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.990133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.917550Z","caller":"traceutil/trace.go:171","msg":"trace[1271701390] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"186.041171ms","start":"2024-08-29T18:08:15.731500Z","end":"2024-08-29T18:08:15.917541Z","steps":["trace[1271701390] 'agreement among raft nodes before linearized reading'  (duration: 185.939215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.917854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.129824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.917882Z","caller":"traceutil/trace.go:171","msg":"trace[471033133] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"124.157063ms","start":"2024-08-29T18:08:15.793714Z","end":"2024-08-29T18:08:15.917871Z","steps":["trace[471033133] 'agreement among raft nodes before linearized reading'  (duration: 124.114367ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:08:22.406730Z","caller":"traceutil/trace.go:171","msg":"trace[351282553] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1198; }","duration":"197.570563ms","start":"2024-08-29T18:08:22.209145Z","end":"2024-08-29T18:08:22.406715Z","steps":["trace[351282553] 'read index received'  (duration: 197.399929ms)","trace[351282553] 'applied index is now lower than readState.Index'  (duration: 170.126µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:08:22.407082Z","caller":"traceutil/trace.go:171","msg":"trace[1670518420] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"347.190393ms","start":"2024-08-29T18:08:22.059878Z","end":"2024-08-29T18:08:22.407068Z","steps":["trace[1670518420] 'process raft request'  (duration: 346.707402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.407202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:08:22.059865Z","time spent":"347.274505ms","remote":"127.0.0.1:37314","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":798,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" mod_revision:1131 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" value_size:704 lease:1009247904961359277 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" > >"}
	{"level":"warn","ts":"2024-08-29T18:08:22.414665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.166922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:22.414738Z","caller":"traceutil/trace.go:171","msg":"trace[241417199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"121.257071ms","start":"2024-08-29T18:08:22.293470Z","end":"2024-08-29T18:08:22.414727Z","steps":["trace[241417199] 'agreement among raft nodes before linearized reading'  (duration: 113.986108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.414703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.662655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T18:08:22.414842Z","caller":"traceutil/trace.go:171","msg":"trace[50687533] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1166; }","duration":"193.845523ms","start":"2024-08-29T18:08:22.220985Z","end":"2024-08-29T18:08:22.414831Z","steps":["trace[50687533] 'agreement among raft nodes before linearized reading'  (duration: 186.452075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.414967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.831006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:22.415002Z","caller":"traceutil/trace.go:171","msg":"trace[16339418] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"205.868124ms","start":"2024-08-29T18:08:22.209128Z","end":"2024-08-29T18:08:22.414996Z","steps":["trace[16339418] 'agreement among raft nodes before linearized reading'  (duration: 198.323343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:57.579149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.73227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-647117\" ","response":"range_response_count:1 size:10787"}
	{"level":"info","ts":"2024-08-29T18:08:57.579235Z","caller":"traceutil/trace.go:171","msg":"trace[263122715] range","detail":"{range_begin:/registry/minions/addons-647117; range_end:; response_count:1; response_revision:1297; }","duration":"103.837782ms","start":"2024-08-29T18:08:57.475383Z","end":"2024-08-29T18:08:57.579221Z","steps":["trace[263122715] 'range keys from in-memory index tree'  (duration: 103.559511ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:47.751238Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1559}
	{"level":"info","ts":"2024-08-29T18:16:47.785191Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1559,"took":"33.367177ms","hash":750415669,"current-db-size-bytes":6561792,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3682304,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-08-29T18:16:47.785252Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":750415669,"revision":1559,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T18:17:34.532473Z","caller":"traceutil/trace.go:171","msg":"trace[1845162260] transaction","detail":"{read_only:false; response_revision:2387; number_of_response:1; }","duration":"292.595899ms","start":"2024-08-29T18:17:34.239840Z","end":"2024-08-29T18:17:34.532436Z","steps":["trace[1845162260] 'process raft request'  (duration: 292.224026ms)"],"step_count":1}
	
	
	==> gcp-auth [a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b] <==
	2024/08/29 18:08:26 Ready to write response ...
	2024/08/29 18:16:36 Ready to marshal response ...
	2024/08/29 18:16:36 Ready to write response ...
	2024/08/29 18:16:40 Ready to marshal response ...
	2024/08/29 18:16:40 Ready to write response ...
	2024/08/29 18:16:41 Ready to marshal response ...
	2024/08/29 18:16:41 Ready to write response ...
	2024/08/29 18:16:41 Ready to marshal response ...
	2024/08/29 18:16:41 Ready to write response ...
	2024/08/29 18:16:55 Ready to marshal response ...
	2024/08/29 18:16:55 Ready to write response ...
	2024/08/29 18:17:00 Ready to marshal response ...
	2024/08/29 18:17:00 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:42 Ready to marshal response ...
	2024/08/29 18:17:42 Ready to write response ...
	2024/08/29 18:17:48 Ready to marshal response ...
	2024/08/29 18:17:48 Ready to write response ...
	2024/08/29 18:20:03 Ready to marshal response ...
	2024/08/29 18:20:03 Ready to write response ...
	
	
	==> kernel <==
	 18:20:13 up 13 min,  0 users,  load average: 0.29, 0.40, 0.42
	Linux addons-647117 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:08:42.103726       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.189.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.189.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.189.204:443: connect: connection refused" logger="UnhandledError"
	I0829 18:08:42.141565       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 18:16:49.533111       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0829 18:17:11.753860       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 18:17:16.195581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.195614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.228724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.228885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.234104       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.234155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.247150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.248440       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.358488       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.358534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:17:17.234989       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:17:17.361145       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 18:17:17.374275       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 18:17:30.375386       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.157.54"}
	I0829 18:17:42.080953       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:17:42.285810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.192.244"}
	I0829 18:17:46.012916       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:17:47.112654       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:20:03.440940       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.118.57"}
	
	
	==> kube-controller-manager [7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d] <==
	E0829 18:19:04.514199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:05.911370       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:05.911421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:17.863841       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:17.863900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:25.012551       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:25.012673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:40.903039       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:40.903184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:51.011275       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:51.011366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:54.568625       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:54.568783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:57.899295       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:57.899478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:20:03.272511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.266716ms"
	I0829 18:20:03.282448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.882877ms"
	I0829 18:20:03.282761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.414µs"
	I0829 18:20:03.283024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.671µs"
	I0829 18:20:03.298492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.874µs"
	I0829 18:20:05.629574       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0829 18:20:05.640585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.753µs"
	I0829 18:20:05.650261       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0829 18:20:07.020251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.679583ms"
	I0829 18:20:07.020361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="68.93µs"
	
	
	==> kube-proxy [20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:06:58.152664       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:06:58.167873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.43"]
	E0829 18:06:58.167951       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:58.245676       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:06:58.245739       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:06:58.245767       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:58.256186       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:58.256510       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:58.256522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:58.261152       1 config.go:197] "Starting service config controller"
	I0829 18:06:58.261223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:58.261753       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:58.261762       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:58.262346       1 config.go:326] "Starting node config controller"
	I0829 18:06:58.262355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:58.362407       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:58.362425       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:58.362435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef] <==
	W0829 18:06:48.898836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:48.898932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.798293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:49.798410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.798508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.798538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.801096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.801188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.811894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:49.811940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.065849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:50.065949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.089891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:50.089949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.116438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:06:50.116507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.133045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:50.133135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.145488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:50.145535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.150457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:50.150555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.390065       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:50.390353       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:52.182506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:20:03 addons-647117 kubelet[1203]: I0829 18:20:03.350801    1203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrcf\" (UniqueName: \"kubernetes.io/projected/17c41e7b-a4ec-4663-bdf0-b1b2832a432d-kube-api-access-qvrcf\") pod \"hello-world-app-55bf9c44b4-q67c7\" (UID: \"17c41e7b-a4ec-4663-bdf0-b1b2832a432d\") " pod="default/hello-world-app-55bf9c44b4-q67c7"
	Aug 29 18:20:04 addons-647117 kubelet[1203]: I0829 18:20:04.357848    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvsd\" (UniqueName: \"kubernetes.io/projected/a9a425c2-2fd3-4e62-be25-f26a8f87ddd1-kube-api-access-kmvsd\") pod \"a9a425c2-2fd3-4e62-be25-f26a8f87ddd1\" (UID: \"a9a425c2-2fd3-4e62-be25-f26a8f87ddd1\") "
	Aug 29 18:20:04 addons-647117 kubelet[1203]: I0829 18:20:04.361108    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a425c2-2fd3-4e62-be25-f26a8f87ddd1-kube-api-access-kmvsd" (OuterVolumeSpecName: "kube-api-access-kmvsd") pod "a9a425c2-2fd3-4e62-be25-f26a8f87ddd1" (UID: "a9a425c2-2fd3-4e62-be25-f26a8f87ddd1"). InnerVolumeSpecName "kube-api-access-kmvsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:20:04 addons-647117 kubelet[1203]: I0829 18:20:04.458218    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kmvsd\" (UniqueName: \"kubernetes.io/projected/a9a425c2-2fd3-4e62-be25-f26a8f87ddd1-kube-api-access-kmvsd\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:20:04 addons-647117 kubelet[1203]: I0829 18:20:04.984002    1203 scope.go:117] "RemoveContainer" containerID="b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d"
	Aug 29 18:20:05 addons-647117 kubelet[1203]: I0829 18:20:05.013067    1203 scope.go:117] "RemoveContainer" containerID="b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d"
	Aug 29 18:20:05 addons-647117 kubelet[1203]: E0829 18:20:05.013616    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d\": container with ID starting with b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d not found: ID does not exist" containerID="b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d"
	Aug 29 18:20:05 addons-647117 kubelet[1203]: I0829 18:20:05.013660    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d"} err="failed to get container status \"b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d\": rpc error: code = NotFound desc = could not find container \"b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d\": container with ID starting with b3f71af1c55306859772595683ac1f9eec3df8cc0609934afe8e77ccbcc1279d not found: ID does not exist"
	Aug 29 18:20:05 addons-647117 kubelet[1203]: I0829 18:20:05.436816    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a425c2-2fd3-4e62-be25-f26a8f87ddd1" path="/var/lib/kubelet/pods/a9a425c2-2fd3-4e62-be25-f26a8f87ddd1/volumes"
	Aug 29 18:20:07 addons-647117 kubelet[1203]: I0829 18:20:07.440274    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52315a01-2d0e-4db7-9560-48dc7a163f0b" path="/var/lib/kubelet/pods/52315a01-2d0e-4db7-9560-48dc7a163f0b/volumes"
	Aug 29 18:20:07 addons-647117 kubelet[1203]: I0829 18:20:07.442104    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af486128-f893-40e3-99de-17a3336cfaeb" path="/var/lib/kubelet/pods/af486128-f893-40e3-99de-17a3336cfaeb/volumes"
	Aug 29 18:20:08 addons-647117 kubelet[1203]: I0829 18:20:08.887981    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smgms\" (UniqueName: \"kubernetes.io/projected/80bd8a11-05a0-44c4-8808-ee33a6be01ec-kube-api-access-smgms\") pod \"80bd8a11-05a0-44c4-8808-ee33a6be01ec\" (UID: \"80bd8a11-05a0-44c4-8808-ee33a6be01ec\") "
	Aug 29 18:20:08 addons-647117 kubelet[1203]: I0829 18:20:08.888031    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80bd8a11-05a0-44c4-8808-ee33a6be01ec-webhook-cert\") pod \"80bd8a11-05a0-44c4-8808-ee33a6be01ec\" (UID: \"80bd8a11-05a0-44c4-8808-ee33a6be01ec\") "
	Aug 29 18:20:08 addons-647117 kubelet[1203]: I0829 18:20:08.890157    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bd8a11-05a0-44c4-8808-ee33a6be01ec-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "80bd8a11-05a0-44c4-8808-ee33a6be01ec" (UID: "80bd8a11-05a0-44c4-8808-ee33a6be01ec"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 29 18:20:08 addons-647117 kubelet[1203]: I0829 18:20:08.891268    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bd8a11-05a0-44c4-8808-ee33a6be01ec-kube-api-access-smgms" (OuterVolumeSpecName: "kube-api-access-smgms") pod "80bd8a11-05a0-44c4-8808-ee33a6be01ec" (UID: "80bd8a11-05a0-44c4-8808-ee33a6be01ec"). InnerVolumeSpecName "kube-api-access-smgms". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:20:08 addons-647117 kubelet[1203]: I0829 18:20:08.989203    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-smgms\" (UniqueName: \"kubernetes.io/projected/80bd8a11-05a0-44c4-8808-ee33a6be01ec-kube-api-access-smgms\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:20:08 addons-647117 kubelet[1203]: I0829 18:20:08.989235    1203 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80bd8a11-05a0-44c4-8808-ee33a6be01ec-webhook-cert\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:20:09 addons-647117 kubelet[1203]: I0829 18:20:09.009688    1203 scope.go:117] "RemoveContainer" containerID="0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d"
	Aug 29 18:20:09 addons-647117 kubelet[1203]: I0829 18:20:09.028752    1203 scope.go:117] "RemoveContainer" containerID="0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d"
	Aug 29 18:20:09 addons-647117 kubelet[1203]: E0829 18:20:09.029144    1203 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d\": container with ID starting with 0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d not found: ID does not exist" containerID="0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d"
	Aug 29 18:20:09 addons-647117 kubelet[1203]: I0829 18:20:09.029207    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d"} err="failed to get container status \"0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d\": rpc error: code = NotFound desc = could not find container \"0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d\": container with ID starting with 0d4cedb7f07b06bbb507aaec05dcc411a1cae9aebecc4e6708d0564fe366591d not found: ID does not exist"
	Aug 29 18:20:09 addons-647117 kubelet[1203]: I0829 18:20:09.437521    1203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bd8a11-05a0-44c4-8808-ee33a6be01ec" path="/var/lib/kubelet/pods/80bd8a11-05a0-44c4-8808-ee33a6be01ec/volumes"
	Aug 29 18:20:10 addons-647117 kubelet[1203]: E0829 18:20:10.434767    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df7618a9-c213-4e89-9b35-5a5530993d5a"
	Aug 29 18:20:11 addons-647117 kubelet[1203]: E0829 18:20:11.856949    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955611856511838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:11 addons-647117 kubelet[1203]: E0829 18:20:11.857265    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955611856511838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747] <==
	I0829 18:07:03.102621       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:07:03.125054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:07:03.125120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:07:03.142183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:07:03.142357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b!
	I0829 18:07:03.143256       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a8c384d-e72d-41a0-bfd7-8f50bdcd533c", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b became leader
	I0829 18:07:03.243000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-647117 -n addons-647117
helpers_test.go:261: (dbg) Run:  kubectl --context addons-647117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-647117 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-647117 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-647117/192.168.39.43
	Start Time:       Thu, 29 Aug 2024 18:08:26 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kj2nj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kj2nj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  11m                 default-scheduler  Successfully assigned default/busybox to addons-647117
	  Normal   Pulling    10m (x4 over 11m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)   kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    99s (x42 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.94s)
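The describe output above shows why busybox never leaves Pending: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with an auth-token error, so the pod sits in ImagePullBackOff and shows up in the post-mortem as the only non-running pod. Below is a minimal Go sketch of that post-mortem step (list non-Running pods, then describe each), mirroring the kubectl commands recorded above; it is illustrative only, not the harness's own code, and the addons-647117 context name is simply copied from the logs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "addons-647117" // profile/context name taken from the report above

	// List pods in any namespace whose phase is not Running, as the harness does.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o", "jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	// Describe each non-running pod so its events (e.g. ImagePullBackOff) are visible.
	for _, pod := range strings.Fields(string(out)) {
		desc, err := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s failed: %v\n", pod, err)
		}
		fmt.Printf("==> %s\n%s\n", pod, desc)
	}
}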

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (303.27s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.413998ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00344931s
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (105.311089ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 9m39.305240919s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (63.561925ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 9m41.190657181s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (75.242837ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 9m44.12933067s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (74.123532ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 9m49.339771461s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (64.063025ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 9m58.039211855s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (62.194409ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 10m10.218977397s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (63.473602ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 10m28.579882147s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (76.240232ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 10m56.857461942s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (61.747215ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 11m51.514743049s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (65.391676ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 12m46.144656393s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (61.316071ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 13m38.150998212s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-647117 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-647117 top pods -n kube-system: exit status 1 (59.397116ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-nhhtz, age: 14m33.809723922s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
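The repeated addons_test.go:417 entries above are a polling loop: the test keeps re-running `kubectl top pods -n kube-system` until metrics become available or the retry budget runs out, and here it gives up after roughly five minutes of "Metrics not available" errors. A minimal sketch of that kind of poll follows, assuming an illustrative 5-minute deadline and 10-second interval (the report only shows the individual retries, not the exact policy).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx := "addons-647117"                      // profile/context name taken from the report above
	deadline := time.Now().Add(5 * time.Minute) // assumed retry budget
	interval := 10 * time.Second                // assumed backoff between attempts

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // metrics-server is serving pod metrics
			return
		}
		fmt.Printf("metrics not ready: %v\n%s", err, out)
		time.Sleep(interval)
	}
	fmt.Println("failed checking metric server: timed out")
}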
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-647117 -n addons-647117
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 logs -n 25: (1.292562481s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-366415                                                                     | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| delete  | -p download-only-105926                                                                     | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-728877 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | binary-mirror-728877                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38491                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-728877                                                                     | binary-mirror-728877 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-647117 --wait=true                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-647117 ssh cat                                                                       | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:16 UTC |
	|         | /opt/local-path-provisioner/pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:16 UTC | 29 Aug 24 18:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-647117                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | -p addons-647117                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-647117 ip                                                                            | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | addons-647117                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-647117 ssh curl -s                                                                   | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:17 UTC | 29 Aug 24 18:17 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-647117 ip                                                                            | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:20 UTC | 29 Aug 24 18:20 UTC |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:20 UTC | 29 Aug 24 18:20 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-647117 addons disable                                                                | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:20 UTC | 29 Aug 24 18:20 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-647117 addons                                                                        | addons-647117        | jenkins | v1.33.1 | 29 Aug 24 18:21 UTC | 29 Aug 24 18:21 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:06:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:06:13.977708   21003 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:06:13.977815   21003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:13.977823   21003 out.go:358] Setting ErrFile to fd 2...
	I0829 18:06:13.977827   21003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:13.977999   21003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:06:13.978601   21003 out.go:352] Setting JSON to false
	I0829 18:06:13.979455   21003 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2921,"bootTime":1724951853,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:06:13.979510   21003 start.go:139] virtualization: kvm guest
	I0829 18:06:14.042675   21003 out.go:177] * [addons-647117] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:06:14.104740   21003 notify.go:220] Checking for updates...
	I0829 18:06:14.167604   21003 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:06:14.229702   21003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:06:14.294106   21003 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:06:14.342682   21003 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:14.344101   21003 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:06:14.345367   21003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:06:14.346953   21003 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:06:14.377848   21003 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:06:14.379196   21003 start.go:297] selected driver: kvm2
	I0829 18:06:14.379209   21003 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:06:14.379220   21003 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:06:14.379903   21003 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:14.379987   21003 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:06:14.395270   21003 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:06:14.395314   21003 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:14.395519   21003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:14.395554   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:14.395565   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:14.395574   21003 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:14.395622   21003 start.go:340] cluster config:
	{Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:14.395709   21003 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:14.397385   21003 out.go:177] * Starting "addons-647117" primary control-plane node in "addons-647117" cluster
	I0829 18:06:14.398568   21003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:06:14.398598   21003 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:06:14.398606   21003 cache.go:56] Caching tarball of preloaded images
	I0829 18:06:14.398682   21003 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:06:14.398692   21003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:06:14.398994   21003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json ...
	I0829 18:06:14.399012   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json: {Name:mkcc99c38dc1733f24d9d95208d6cd89ecd08f71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
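The profile config dumped above is persisted as JSON under the profile directory (config.json). A minimal sketch of what writing such a file could look like, using only a trimmed-down struct whose field names are taken from the dump; the real types live in minikube's config package and carry many more fields, and the path below is hypothetical:

    // Sketch only: persist a trimmed-down view of the cluster config shown in the log.
    package main

    import (
        "encoding/json"
        "log"
        "os"
        "path/filepath"
    )

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        NetworkPlugin     string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int // MiB
        CPUs             int
        DiskSize         int // MB
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cfg := ClusterConfig{
            Name: "addons-647117", Driver: "kvm2",
            Memory: 4000, CPUs: 2, DiskSize: 20000,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.31.0",
                ClusterName:       "addons-647117",
                ContainerRuntime:  "crio",
                NetworkPlugin:     "cni",
            },
        }
        dir := filepath.Join(os.TempDir(), "profiles", cfg.Name) // hypothetical profile dir
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        data, _ := json.MarshalIndent(cfg, "", "  ")
        if err := os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644); err != nil {
            log.Fatal(err)
        }
    }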
	I0829 18:06:14.399129   21003 start.go:360] acquireMachinesLock for addons-647117: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:06:14.399169   21003 start.go:364] duration metric: took 27.979µs to acquireMachinesLock for "addons-647117"
	I0829 18:06:14.399185   21003 start.go:93] Provisioning new machine with config: &{Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:14.399236   21003 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:06:14.400651   21003 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 18:06:14.400800   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:14.400842   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:14.414391   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0829 18:06:14.414771   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:14.415264   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:14.415277   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:14.415573   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:14.415698   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:14.415826   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:14.415924   21003 start.go:159] libmachine.API.Create for "addons-647117" (driver="kvm2")
	I0829 18:06:14.415948   21003 client.go:168] LocalClient.Create starting
	I0829 18:06:14.415980   21003 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:06:14.569250   21003 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:06:14.895450   21003 main.go:141] libmachine: Running pre-create checks...
	I0829 18:06:14.895478   21003 main.go:141] libmachine: (addons-647117) Calling .PreCreateCheck
	I0829 18:06:14.896002   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:14.896427   21003 main.go:141] libmachine: Creating machine...
	I0829 18:06:14.896441   21003 main.go:141] libmachine: (addons-647117) Calling .Create
	I0829 18:06:14.896565   21003 main.go:141] libmachine: (addons-647117) Creating KVM machine...
	I0829 18:06:14.897900   21003 main.go:141] libmachine: (addons-647117) DBG | found existing default KVM network
	I0829 18:06:14.898643   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:14.898505   21025 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
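Before defining the network, the driver picks a free private /24 (here 192.168.39.0/24). A rough sketch of one way such a check could be done, comparing candidate subnets against the host's existing interface addresses; the candidate range and the method are assumptions, not the logic in minikube's network.go:

    // Sketch only: pick the first 192.168.X.0/24 that no host interface already sits in.
    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func inUse(subnet *net.IPNet) (bool, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return false, err
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        for x := 39; x <= 62; x++ { // candidate range is an assumption
            _, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", x))
            used, err := inUse(subnet)
            if err != nil {
                log.Fatal(err)
            }
            if !used {
                fmt.Println("free private subnet:", subnet)
                return
            }
        }
        log.Fatal("no free subnet found in candidate range")
    }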
	I0829 18:06:14.898675   21003 main.go:141] libmachine: (addons-647117) DBG | created network xml: 
	I0829 18:06:14.898690   21003 main.go:141] libmachine: (addons-647117) DBG | <network>
	I0829 18:06:14.898701   21003 main.go:141] libmachine: (addons-647117) DBG |   <name>mk-addons-647117</name>
	I0829 18:06:14.898712   21003 main.go:141] libmachine: (addons-647117) DBG |   <dns enable='no'/>
	I0829 18:06:14.898720   21003 main.go:141] libmachine: (addons-647117) DBG |   
	I0829 18:06:14.898727   21003 main.go:141] libmachine: (addons-647117) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:06:14.898734   21003 main.go:141] libmachine: (addons-647117) DBG |     <dhcp>
	I0829 18:06:14.898743   21003 main.go:141] libmachine: (addons-647117) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:06:14.898752   21003 main.go:141] libmachine: (addons-647117) DBG |     </dhcp>
	I0829 18:06:14.898766   21003 main.go:141] libmachine: (addons-647117) DBG |   </ip>
	I0829 18:06:14.898775   21003 main.go:141] libmachine: (addons-647117) DBG |   
	I0829 18:06:14.898785   21003 main.go:141] libmachine: (addons-647117) DBG | </network>
	I0829 18:06:14.898795   21003 main.go:141] libmachine: (addons-647117) DBG | 
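The XML above is the network definition the kvm2 driver hands to libvirt. A minimal standard-library sketch that renders an equivalent document from the name, gateway, and DHCP range seen in the log; this is an illustration of the shape of the data, not the driver's actual template:

    // Sketch only: render a libvirt network definition like the one in the log.
    package main

    import (
        "os"
        "text/template"
    )

    const networkTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
        params := struct{ Name, Gateway, ClientMin, ClientMax string }{
            Name:      "mk-addons-647117",
            Gateway:   "192.168.39.1",
            ClientMin: "192.168.39.2",
            ClientMax: "192.168.39.253",
        }
        tmpl := template.Must(template.New("net").Parse(networkTmpl))
        if err := tmpl.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }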
	I0829 18:06:14.904085   21003 main.go:141] libmachine: (addons-647117) DBG | trying to create private KVM network mk-addons-647117 192.168.39.0/24...
	I0829 18:06:14.968799   21003 main.go:141] libmachine: (addons-647117) DBG | private KVM network mk-addons-647117 192.168.39.0/24 created
	I0829 18:06:14.968849   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:14.968765   21025 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:14.968877   21003 main.go:141] libmachine: (addons-647117) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 ...
	I0829 18:06:14.968903   21003 main.go:141] libmachine: (addons-647117) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:06:14.968915   21003 main.go:141] libmachine: (addons-647117) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:06:15.221752   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.221579   21025 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa...
	I0829 18:06:15.315051   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.314930   21025 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/addons-647117.rawdisk...
	I0829 18:06:15.315079   21003 main.go:141] libmachine: (addons-647117) DBG | Writing magic tar header
	I0829 18:06:15.315090   21003 main.go:141] libmachine: (addons-647117) DBG | Writing SSH key tar header
	I0829 18:06:15.315098   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:15.315038   21025 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 ...
	I0829 18:06:15.315184   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117
	I0829 18:06:15.315224   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:06:15.315248   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:06:15.315262   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117 (perms=drwx------)
	I0829 18:06:15.315273   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:06:15.315304   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:06:15.315312   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:06:15.315321   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:06:15.315328   21003 main.go:141] libmachine: (addons-647117) DBG | Checking permissions on dir: /home
	I0829 18:06:15.315335   21003 main.go:141] libmachine: (addons-647117) DBG | Skipping /home - not owner
	I0829 18:06:15.315347   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:06:15.315365   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:06:15.315380   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:06:15.315392   21003 main.go:141] libmachine: (addons-647117) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:06:15.315402   21003 main.go:141] libmachine: (addons-647117) Creating domain...
	I0829 18:06:15.316378   21003 main.go:141] libmachine: (addons-647117) define libvirt domain using xml: 
	I0829 18:06:15.316405   21003 main.go:141] libmachine: (addons-647117) <domain type='kvm'>
	I0829 18:06:15.316415   21003 main.go:141] libmachine: (addons-647117)   <name>addons-647117</name>
	I0829 18:06:15.316423   21003 main.go:141] libmachine: (addons-647117)   <memory unit='MiB'>4000</memory>
	I0829 18:06:15.316431   21003 main.go:141] libmachine: (addons-647117)   <vcpu>2</vcpu>
	I0829 18:06:15.316442   21003 main.go:141] libmachine: (addons-647117)   <features>
	I0829 18:06:15.316449   21003 main.go:141] libmachine: (addons-647117)     <acpi/>
	I0829 18:06:15.316456   21003 main.go:141] libmachine: (addons-647117)     <apic/>
	I0829 18:06:15.316462   21003 main.go:141] libmachine: (addons-647117)     <pae/>
	I0829 18:06:15.316466   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316471   21003 main.go:141] libmachine: (addons-647117)   </features>
	I0829 18:06:15.316478   21003 main.go:141] libmachine: (addons-647117)   <cpu mode='host-passthrough'>
	I0829 18:06:15.316485   21003 main.go:141] libmachine: (addons-647117)   
	I0829 18:06:15.316498   21003 main.go:141] libmachine: (addons-647117)   </cpu>
	I0829 18:06:15.316508   21003 main.go:141] libmachine: (addons-647117)   <os>
	I0829 18:06:15.316517   21003 main.go:141] libmachine: (addons-647117)     <type>hvm</type>
	I0829 18:06:15.316539   21003 main.go:141] libmachine: (addons-647117)     <boot dev='cdrom'/>
	I0829 18:06:15.316547   21003 main.go:141] libmachine: (addons-647117)     <boot dev='hd'/>
	I0829 18:06:15.316552   21003 main.go:141] libmachine: (addons-647117)     <bootmenu enable='no'/>
	I0829 18:06:15.316559   21003 main.go:141] libmachine: (addons-647117)   </os>
	I0829 18:06:15.316563   21003 main.go:141] libmachine: (addons-647117)   <devices>
	I0829 18:06:15.316572   21003 main.go:141] libmachine: (addons-647117)     <disk type='file' device='cdrom'>
	I0829 18:06:15.316581   21003 main.go:141] libmachine: (addons-647117)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/boot2docker.iso'/>
	I0829 18:06:15.316590   21003 main.go:141] libmachine: (addons-647117)       <target dev='hdc' bus='scsi'/>
	I0829 18:06:15.316595   21003 main.go:141] libmachine: (addons-647117)       <readonly/>
	I0829 18:06:15.316602   21003 main.go:141] libmachine: (addons-647117)     </disk>
	I0829 18:06:15.316607   21003 main.go:141] libmachine: (addons-647117)     <disk type='file' device='disk'>
	I0829 18:06:15.316626   21003 main.go:141] libmachine: (addons-647117)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:06:15.316642   21003 main.go:141] libmachine: (addons-647117)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/addons-647117.rawdisk'/>
	I0829 18:06:15.316654   21003 main.go:141] libmachine: (addons-647117)       <target dev='hda' bus='virtio'/>
	I0829 18:06:15.316661   21003 main.go:141] libmachine: (addons-647117)     </disk>
	I0829 18:06:15.316669   21003 main.go:141] libmachine: (addons-647117)     <interface type='network'>
	I0829 18:06:15.316676   21003 main.go:141] libmachine: (addons-647117)       <source network='mk-addons-647117'/>
	I0829 18:06:15.316682   21003 main.go:141] libmachine: (addons-647117)       <model type='virtio'/>
	I0829 18:06:15.316691   21003 main.go:141] libmachine: (addons-647117)     </interface>
	I0829 18:06:15.316697   21003 main.go:141] libmachine: (addons-647117)     <interface type='network'>
	I0829 18:06:15.316707   21003 main.go:141] libmachine: (addons-647117)       <source network='default'/>
	I0829 18:06:15.316722   21003 main.go:141] libmachine: (addons-647117)       <model type='virtio'/>
	I0829 18:06:15.316738   21003 main.go:141] libmachine: (addons-647117)     </interface>
	I0829 18:06:15.316747   21003 main.go:141] libmachine: (addons-647117)     <serial type='pty'>
	I0829 18:06:15.316759   21003 main.go:141] libmachine: (addons-647117)       <target port='0'/>
	I0829 18:06:15.316779   21003 main.go:141] libmachine: (addons-647117)     </serial>
	I0829 18:06:15.316794   21003 main.go:141] libmachine: (addons-647117)     <console type='pty'>
	I0829 18:06:15.316812   21003 main.go:141] libmachine: (addons-647117)       <target type='serial' port='0'/>
	I0829 18:06:15.316825   21003 main.go:141] libmachine: (addons-647117)     </console>
	I0829 18:06:15.316835   21003 main.go:141] libmachine: (addons-647117)     <rng model='virtio'>
	I0829 18:06:15.316848   21003 main.go:141] libmachine: (addons-647117)       <backend model='random'>/dev/random</backend>
	I0829 18:06:15.316855   21003 main.go:141] libmachine: (addons-647117)     </rng>
	I0829 18:06:15.316860   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316866   21003 main.go:141] libmachine: (addons-647117)     
	I0829 18:06:15.316871   21003 main.go:141] libmachine: (addons-647117)   </devices>
	I0829 18:06:15.316880   21003 main.go:141] libmachine: (addons-647117) </domain>
	I0829 18:06:15.316887   21003 main.go:141] libmachine: (addons-647117) 
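Once the domain XML above is assembled, it is defined and started through libvirt ("Creating domain..."). A minimal sketch of that step using the libvirt Go bindings, assuming libvirt.org/go/libvirt is available (cgo plus the libvirt C library) and the XML is already on disk; the file name is hypothetical and this is not the driver's exact code path:

    // Sketch only: define and boot a domain from prepared XML via the libvirt Go bindings.
    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("addons-647117.xml") // hypothetical file holding the XML above
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot the VM
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }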
	I0829 18:06:15.323470   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:5e:cf:4e in network default
	I0829 18:06:15.324032   21003 main.go:141] libmachine: (addons-647117) Ensuring networks are active...
	I0829 18:06:15.324048   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:15.324701   21003 main.go:141] libmachine: (addons-647117) Ensuring network default is active
	I0829 18:06:15.325084   21003 main.go:141] libmachine: (addons-647117) Ensuring network mk-addons-647117 is active
	I0829 18:06:15.325712   21003 main.go:141] libmachine: (addons-647117) Getting domain xml...
	I0829 18:06:15.326373   21003 main.go:141] libmachine: (addons-647117) Creating domain...
	I0829 18:06:16.712917   21003 main.go:141] libmachine: (addons-647117) Waiting to get IP...
	I0829 18:06:16.713812   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:16.714232   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:16.714268   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:16.714191   21025 retry.go:31] will retry after 238.340471ms: waiting for machine to come up
	I0829 18:06:16.954554   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:16.954978   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:16.955001   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:16.954942   21025 retry.go:31] will retry after 341.720897ms: waiting for machine to come up
	I0829 18:06:17.298471   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:17.298940   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:17.298959   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:17.298900   21025 retry.go:31] will retry after 367.433652ms: waiting for machine to come up
	I0829 18:06:17.668160   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:17.668555   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:17.668592   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:17.668512   21025 retry.go:31] will retry after 516.863981ms: waiting for machine to come up
	I0829 18:06:18.187183   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:18.187670   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:18.187696   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:18.187622   21025 retry.go:31] will retry after 716.140795ms: waiting for machine to come up
	I0829 18:06:18.905500   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:18.905827   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:18.905850   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:18.905787   21025 retry.go:31] will retry after 722.824428ms: waiting for machine to come up
	I0829 18:06:19.630367   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:19.630812   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:19.630841   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:19.630788   21025 retry.go:31] will retry after 1.117686988s: waiting for machine to come up
	I0829 18:06:20.750072   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:20.750586   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:20.750618   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:20.750537   21025 retry.go:31] will retry after 1.201180121s: waiting for machine to come up
	I0829 18:06:21.953781   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:21.954227   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:21.954255   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:21.954176   21025 retry.go:31] will retry after 1.317171091s: waiting for machine to come up
	I0829 18:06:23.273606   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:23.274028   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:23.274056   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:23.273995   21025 retry.go:31] will retry after 2.013319683s: waiting for machine to come up
	I0829 18:06:25.289339   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:25.289856   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:25.289881   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:25.289815   21025 retry.go:31] will retry after 2.820105587s: waiting for machine to come up
	I0829 18:06:28.113685   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:28.113965   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:28.113988   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:28.113931   21025 retry.go:31] will retry after 2.971291296s: waiting for machine to come up
	I0829 18:06:31.088861   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:31.089282   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find current IP address of domain addons-647117 in network mk-addons-647117
	I0829 18:06:31.089302   21003 main.go:141] libmachine: (addons-647117) DBG | I0829 18:06:31.089247   21025 retry.go:31] will retry after 3.52398133s: waiting for machine to come up
	I0829 18:06:34.615265   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.615739   21003 main.go:141] libmachine: (addons-647117) Found IP for machine: 192.168.39.43
	I0829 18:06:34.615757   21003 main.go:141] libmachine: (addons-647117) Reserving static IP address...
	I0829 18:06:34.615765   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has current primary IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.616209   21003 main.go:141] libmachine: (addons-647117) DBG | unable to find host DHCP lease matching {name: "addons-647117", mac: "52:54:00:b2:0d:0e", ip: "192.168.39.43"} in network mk-addons-647117
	I0829 18:06:34.684039   21003 main.go:141] libmachine: (addons-647117) DBG | Getting to WaitForSSH function...
	I0829 18:06:34.684068   21003 main.go:141] libmachine: (addons-647117) Reserved static IP address: 192.168.39.43
	I0829 18:06:34.684097   21003 main.go:141] libmachine: (addons-647117) Waiting for SSH to be available...
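The "will retry after ..." lines above come from a poll loop that waits, with a growing delay, for the domain's DHCP lease to show an IP. A generic sketch of that pattern with the lease lookup stubbed out; the delays, jitter, and cap are assumptions, not minikube's retry package:

    // Sketch only: poll for a condition with a growing, jittered delay.
    // lookupIP is a stand-in for the real "ask libvirt for the lease of this MAC" call.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    func lookupIP() (string, error) {
        return "", errNoIP // placeholder: always fails in this sketch
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if ip, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }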
	I0829 18:06:34.686579   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.686973   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.687021   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.687238   21003 main.go:141] libmachine: (addons-647117) DBG | Using SSH client type: external
	I0829 18:06:34.687266   21003 main.go:141] libmachine: (addons-647117) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa (-rw-------)
	I0829 18:06:34.687303   21003 main.go:141] libmachine: (addons-647117) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:06:34.687317   21003 main.go:141] libmachine: (addons-647117) DBG | About to run SSH command:
	I0829 18:06:34.687334   21003 main.go:141] libmachine: (addons-647117) DBG | exit 0
	I0829 18:06:34.813742   21003 main.go:141] libmachine: (addons-647117) DBG | SSH cmd err, output: <nil>: 
	I0829 18:06:34.814023   21003 main.go:141] libmachine: (addons-647117) KVM machine creation complete!
	I0829 18:06:34.814355   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:34.814860   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:34.815029   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:34.815194   21003 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:06:34.815210   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:34.816482   21003 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:06:34.816493   21003 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:06:34.816499   21003 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:06:34.816504   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:34.818985   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.819310   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.819338   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.819489   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:34.819706   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.819854   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.820002   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:34.820159   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:34.820371   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:34.820389   21003 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:06:34.921578   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
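Both the external ssh invocation earlier and the native client here boil down to running "exit 0" over SSH with the generated machine key until it succeeds. A minimal sketch of that probe using golang.org/x/crypto/ssh; the key path is a placeholder, host-key checking is disabled only to mirror the StrictHostKeyChecking=no flags in the log, and this is not libmachine's implementation:

    // Sketch only: run "exit 0" over SSH with a private key to confirm reachability.
    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/path/to/machines/addons-647117/id_rsa") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.43:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        if err := session.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }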
	I0829 18:06:34.921611   21003 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:06:34.921625   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:34.924576   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.924991   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:34.925016   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:34.925174   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:34.925364   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.925535   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:34.925681   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:34.925862   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:34.926048   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:34.926062   21003 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:06:35.026824   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:06:35.026889   21003 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:06:35.026897   21003 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:06:35.026904   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.027145   21003 buildroot.go:166] provisioning hostname "addons-647117"
	I0829 18:06:35.027170   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.027344   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.029702   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.030060   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.030099   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.030232   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.030413   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.030536   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.030687   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.030879   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.031071   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.031084   21003 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-647117 && echo "addons-647117" | sudo tee /etc/hostname
	I0829 18:06:35.143742   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-647117
	
	I0829 18:06:35.143777   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.146325   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.146651   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.146679   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.146798   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.146981   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.147130   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.147305   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.147468   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.147673   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.147697   21003 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-647117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-647117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-647117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:06:35.254118   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:06:35.254140   21003 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:06:35.254159   21003 buildroot.go:174] setting up certificates
	I0829 18:06:35.254169   21003 provision.go:84] configureAuth start
	I0829 18:06:35.254180   21003 main.go:141] libmachine: (addons-647117) Calling .GetMachineName
	I0829 18:06:35.254506   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:35.256912   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.257308   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.257336   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.257542   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.259793   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.260096   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.260130   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.260195   21003 provision.go:143] copyHostCerts
	I0829 18:06:35.260261   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:06:35.260392   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:06:35.260483   21003 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:06:35.260557   21003 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.addons-647117 san=[127.0.0.1 192.168.39.43 addons-647117 localhost minikube]
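The server certificate above is issued by the local minikube CA with the SANs listed (127.0.0.1, the VM IP, the hostname, localhost, minikube). A condensed crypto/x509 sketch of that kind of issuance, assuming a PKCS#1 RSA CA key pair already exists at the placeholder paths and borrowing the 26280h lifetime from the CertExpiration field in the config dump; it is not the exact helper minikube uses:

    // Sketch only: issue a CA-signed server certificate with the SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caCertPEM, err := os.ReadFile("certs/ca.pem") // placeholder paths
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caCertPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            log.Fatal("could not decode CA PEM data")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-647117"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // lifetime borrowed from CertExpiration
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-647117", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        _ = os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
    }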
	I0829 18:06:35.482587   21003 provision.go:177] copyRemoteCerts
	I0829 18:06:35.482639   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:06:35.482659   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.485179   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.485582   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.485615   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.485697   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.485936   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.486060   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.486278   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:35.563694   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:06:35.586261   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:06:35.607564   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:06:35.628579   21003 provision.go:87] duration metric: took 374.398756ms to configureAuth
	I0829 18:06:35.628613   21003 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:06:35.628805   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:35.628886   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.631347   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.631736   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.631762   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.631917   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.632078   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.632214   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.632368   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.632522   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.632739   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.632758   21003 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:06:35.841964   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:06:35.841995   21003 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:06:35.842008   21003 main.go:141] libmachine: (addons-647117) Calling .GetURL
	I0829 18:06:35.843265   21003 main.go:141] libmachine: (addons-647117) DBG | Using libvirt version 6000000
	I0829 18:06:35.845052   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.845418   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.845442   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.845675   21003 main.go:141] libmachine: Docker is up and running!
	I0829 18:06:35.845695   21003 main.go:141] libmachine: Reticulating splines...
	I0829 18:06:35.845701   21003 client.go:171] duration metric: took 21.429743968s to LocalClient.Create
	I0829 18:06:35.845719   21003 start.go:167] duration metric: took 21.429794926s to libmachine.API.Create "addons-647117"
	I0829 18:06:35.845736   21003 start.go:293] postStartSetup for "addons-647117" (driver="kvm2")
	I0829 18:06:35.845745   21003 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:06:35.845761   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:35.846039   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:06:35.846062   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.848219   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.848637   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.848666   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.848784   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.848951   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.849108   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.849229   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:35.928027   21003 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:06:35.932082   21003 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:06:35.932107   21003 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:06:35.932175   21003 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:06:35.932199   21003 start.go:296] duration metric: took 86.457988ms for postStartSetup
	I0829 18:06:35.932245   21003 main.go:141] libmachine: (addons-647117) Calling .GetConfigRaw
	I0829 18:06:35.932768   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:35.935311   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.935660   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.935689   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.935874   21003 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/config.json ...
	I0829 18:06:35.936046   21003 start.go:128] duration metric: took 21.536800088s to createHost
	I0829 18:06:35.936069   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:35.938226   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.938550   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:35.938580   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:35.938691   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:35.938940   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.939092   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:35.939226   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:35.939371   21003 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:35.939518   21003 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0829 18:06:35.939538   21003 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:06:36.038471   21003 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724954796.013287706
	
	I0829 18:06:36.038494   21003 fix.go:216] guest clock: 1724954796.013287706
	I0829 18:06:36.038502   21003 fix.go:229] Guest: 2024-08-29 18:06:36.013287706 +0000 UTC Remote: 2024-08-29 18:06:35.936057575 +0000 UTC m=+21.991416237 (delta=77.230131ms)
	I0829 18:06:36.038547   21003 fix.go:200] guest clock delta is within tolerance: 77.230131ms
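The reported delta is just the guest timestamp returned by `date +%s.%N` minus the host-side reference taken before the SSH call: 1724954796.013287706 - 1724954795.936057575 is roughly 77.23 ms. A tiny sketch of that comparison, with the tolerance value assumed since fix.go's threshold is not shown in the log:

    // Sketch only: compare a guest clock sample against the host reference and a tolerance.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1724954796, 13287706)                       // 1724954796.013287706
        host := time.Date(2024, 8, 29, 18, 6, 35, 936057575, time.UTC) // host-side reference
        delta := guest.Sub(host)
        const tolerance = 2 * time.Second // assumed threshold; the real value lives in fix.go
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
    }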
	I0829 18:06:36.038563   21003 start.go:83] releasing machines lock for "addons-647117", held for 21.639379915s
	I0829 18:06:36.038587   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.038894   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:36.041687   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.042103   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.042129   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.042309   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.042820   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.042990   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:36.043053   21003 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:06:36.043093   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:36.043222   21003 ssh_runner.go:195] Run: cat /version.json
	I0829 18:06:36.043244   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:36.045522   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.045759   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.045868   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.045890   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.046150   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:36.046153   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:36.046208   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:36.046302   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:36.046386   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:36.046567   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:36.046570   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:36.046716   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:36.046731   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:36.046852   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:36.118579   21003 ssh_runner.go:195] Run: systemctl --version
	I0829 18:06:36.156970   21003 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:06:36.311217   21003 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:06:36.316594   21003 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:06:36.316675   21003 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:06:36.332219   21003 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:06:36.332250   21003 start.go:495] detecting cgroup driver to use...
	I0829 18:06:36.332314   21003 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:06:36.347317   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:06:36.360521   21003 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:06:36.360590   21003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:06:36.373585   21003 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:06:36.386343   21003 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:06:36.502547   21003 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:06:36.637748   21003 docker.go:233] disabling docker service ...
	I0829 18:06:36.637830   21003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:06:36.651446   21003 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:06:36.663735   21003 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:06:36.798359   21003 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:06:36.922508   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:06:36.935648   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:36.952902   21003 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:06:36.952958   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.963059   21003 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:06:36.963140   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.973105   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.982774   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:36.992245   21003 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:37.001920   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.011179   21003 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:06:37.026117   21003 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
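
[Editor's note] The sed commands above rewrite CRI-O's drop-in config so the runtime uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, pins conmon to the "pod" cgroup, and opens unprivileged low ports via default_sysctls. The following is a minimal, hedged Go sketch that condenses the same substitutions on a local copy of the drop-in file; the file name "02-crio.conf" and the values come from the log, but this is not minikube's own code and it folds several sed passes into one rewrite.

// crio_dropin.go: condensed sketch of the drop-in rewrites the log performs via sed.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	path := "02-crio.conf" // assumption: a writable copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// pause_image = "registry.k8s.io/pause:3.10"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Drop any pre-existing conmon_cgroup line, then set cgroupfs and re-add conmon_cgroup = "pod".
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	// Allow unprivileged low ports inside pods, as the default_sysctls edits do.
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
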
	I0829 18:06:37.035522   21003 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:37.043886   21003 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:06:37.043934   21003 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:06:37.055999   21003 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:06:37.064714   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:37.196530   21003 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:06:37.287929   21003 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:06:37.288028   21003 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:06:37.292396   21003 start.go:563] Will wait 60s for crictl version
	I0829 18:06:37.292454   21003 ssh_runner.go:195] Run: which crictl
	I0829 18:06:37.296073   21003 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:06:37.332725   21003 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:06:37.332849   21003 ssh_runner.go:195] Run: crio --version
	I0829 18:06:37.359173   21003 ssh_runner.go:195] Run: crio --version
	I0829 18:06:37.388107   21003 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:06:37.389284   21003 main.go:141] libmachine: (addons-647117) Calling .GetIP
	I0829 18:06:37.391507   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:37.391814   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:37.391841   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:37.391979   21003 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:37.395789   21003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
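
[Editor's note] The bash one-liner above makes the /etc/hosts update idempotent: any stale host.minikube.internal line is filtered out before the fresh 192.168.39.1 mapping is appended and the temp file is copied back. A rough local equivalent in Go follows; the path, IP, and hostname are taken from the log, while the helper name ensureHostsEntry is invented for illustration.

// hosts_entry.go: sketch of the idempotent hosts-file rewrite shown in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line that maps name and appends "ip<TAB>name".
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// assumption: run against a scratch copy, not the real /etc/hosts
	if err := ensureHostsEntry("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
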
	I0829 18:06:37.408717   21003 kubeadm.go:883] updating cluster {Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:06:37.408820   21003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:06:37.408873   21003 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:06:37.443962   21003 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:06:37.444029   21003 ssh_runner.go:195] Run: which lz4
	I0829 18:06:37.447695   21003 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:06:37.451549   21003 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:06:37.451575   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:06:38.585685   21003 crio.go:462] duration metric: took 1.138016489s to copy over tarball
	I0829 18:06:38.585747   21003 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:06:40.668015   21003 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082235438s)
	I0829 18:06:40.668044   21003 crio.go:469] duration metric: took 2.082332165s to extract the tarball
	I0829 18:06:40.668052   21003 ssh_runner.go:146] rm: /preloaded.tar.lz4
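
[Editor's note] The preload path avoids pulling every image over the network: a roughly 389 MB lz4 tarball of the container store is copied to the node and unpacked straight into /var, after which the second crictl images call sees everything as already present and the tarball is removed. A hedged sketch of the extract-and-clean-up step, shelling out with the same tar flags the log shows (it assumes tar with lz4 support on the target and is not minikube's implementation):

// preload_extract.go: sketch of unpacking a preloaded image tarball as in the log.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4" // assumption: already copied to the node, as the scp step shows

	// Same flags as the log: keep xattrs (security.capability) so file capabilities survive.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}

	// The tarball is only a transfer vehicle; remove it once the layers are in place.
	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup: %v", err)
	}
}
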
	I0829 18:06:40.704995   21003 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:06:40.744652   21003 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:06:40.744681   21003 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:06:40.744691   21003 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.31.0 crio true true} ...
	I0829 18:06:40.744815   21003 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-647117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:06:40.744879   21003 ssh_runner.go:195] Run: crio config
	I0829 18:06:40.799521   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:40.799538   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:40.799554   21003 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:40.799578   21003 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-647117 NodeName:addons-647117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:40.799725   21003 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-647117"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:06:40.799784   21003 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:40.809042   21003 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:06:40.809100   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:40.817470   21003 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 18:06:40.832347   21003 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:40.846895   21003 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
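
[Editor's note] The kubeadm.yaml shown above is not hand-written; it is rendered from the option set logged at kubeadm.go:181 and then shipped to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of that render step with text/template follows; the struct and the trimmed-down InitConfiguration template are illustrative reductions, not minikube's actual template, and the values are the ones visible in the log.

// kubeadm_render.go: toy sketch of rendering a kubeadm InitConfiguration from options.
package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	o := opts{
		AdvertiseAddress: "192.168.39.43",
		BindPort:         8443,
		NodeName:         "addons-647117",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, o); err != nil {
		panic(err)
	}
}
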
	I0829 18:06:40.861793   21003 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:40.865178   21003 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:40.875661   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:40.982884   21003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:40.997705   21003 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117 for IP: 192.168.39.43
	I0829 18:06:40.997731   21003 certs.go:194] generating shared ca certs ...
	I0829 18:06:40.997746   21003 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:40.997866   21003 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:06:41.043528   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt ...
	I0829 18:06:41.043558   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt: {Name:mkea6106ba4ad65ce6f8bed60295c8f24482327b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.043722   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key ...
	I0829 18:06:41.043735   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key: {Name:mke9ce6afa81d222f2c50749e4037b87a5d38dd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.043805   21003 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:06:41.128075   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt ...
	I0829 18:06:41.128106   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt: {Name:mkdbc53401c430ff97fec9666f2d5f302313570c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.128259   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key ...
	I0829 18:06:41.128270   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key: {Name:mk367415a361fb5a9c7503ec33cd8caa1e52aa57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
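
[Editor's note] certs.go generates two self-signed CAs (minikubeCA and proxyClientCA) before any profile certificates are cut; the profile certs that follow are then signed by these. The general shape of that CA step, as a hedged standard-library sketch: the key size, subject, lifetime, and output file names here are illustrative choices, not necessarily minikube's exact ones.

// gen_ca.go: sketch of generating a self-signed CA in the spirit of "minikubeCA".
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // illustrative lifetime
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	writePEM("ca.crt", "CERTIFICATE", der)
	writePEM("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
}

func writePEM(path, typ string, der []byte) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pem.Encode(f, &pem.Block{Type: typ, Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
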
	I0829 18:06:41.128329   21003 certs.go:256] generating profile certs ...
	I0829 18:06:41.128382   21003 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key
	I0829 18:06:41.128395   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt with IP's: []
	I0829 18:06:41.221652   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt ...
	I0829 18:06:41.221679   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: {Name:mk7255e28303157d05d1b68e28117d8e36fbd22c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.221828   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key ...
	I0829 18:06:41.221838   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.key: {Name:mkbf2b01f6f057886492f2c68b0e29df0e06c856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.222390   21003 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9
	I0829 18:06:41.222413   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.43]
	I0829 18:06:41.392081   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 ...
	I0829 18:06:41.392114   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9: {Name:mkd530b794cbdec523005231e4a057aefd476fa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.392297   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9 ...
	I0829 18:06:41.392313   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9: {Name:mk3e2c877bb82fbb95364dcb98f1881ca9941820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.392417   21003 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt.d21cc6c9 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt
	I0829 18:06:41.392493   21003 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key.d21cc6c9 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key
	I0829 18:06:41.392538   21003 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key
	I0829 18:06:41.392555   21003 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt with IP's: []
	I0829 18:06:41.549956   21003 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt ...
	I0829 18:06:41.549986   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt: {Name:mke718e76c91b48339bb92cf2bf888e30bb5dc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.550174   21003 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key ...
	I0829 18:06:41.550190   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key: {Name:mkd9cbaa4b6e0247b270644d1a1f676717828d7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:41.550382   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:06:41.550419   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:06:41.550440   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:41.550461   21003 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:06:41.551061   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:41.574578   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:06:41.596186   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:41.617109   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:06:41.638159   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:06:41.661044   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:06:41.698709   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:41.722591   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:06:41.743216   21003 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:41.763431   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:41.777864   21003 ssh_runner.go:195] Run: openssl version
	I0829 18:06:41.783206   21003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:41.793369   21003 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.797576   21003 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.797635   21003 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:41.803014   21003 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:06:41.812720   21003 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:41.816257   21003 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:41.816304   21003 kubeadm.go:392] StartCluster: {Name:addons-647117 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-647117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:41.816395   21003 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:06:41.816453   21003 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:06:41.849244   21003 cri.go:89] found id: ""
	I0829 18:06:41.849319   21003 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:41.858563   21003 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:41.867292   21003 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:41.876016   21003 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:41.876037   21003 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:41.876080   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:41.884227   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:41.884280   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:41.892834   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:41.900929   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:41.900979   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:41.909576   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:41.917827   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:41.917879   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:41.926476   21003 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:41.934804   21003 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:41.934856   21003 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:06:41.943606   21003 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:06:41.992646   21003 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:41.992776   21003 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:42.092351   21003 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:42.092518   21003 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:42.092669   21003 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:42.101559   21003 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:42.104509   21003 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:42.104621   21003 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:42.104687   21003 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:42.537741   21003 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:42.671932   21003 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:42.772862   21003 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:42.890551   21003 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:43.201812   21003 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:43.202000   21003 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-647117 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0829 18:06:43.375327   21003 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:43.375499   21003 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-647117 localhost] and IPs [192.168.39.43 127.0.0.1 ::1]
	I0829 18:06:43.548880   21003 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:43.670158   21003 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:43.818859   21003 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:43.818919   21003 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:44.033791   21003 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:44.234114   21003 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:44.283551   21003 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:44.377485   21003 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:44.608153   21003 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:44.608910   21003 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:44.611448   21003 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:44.613436   21003 out.go:235]   - Booting up control plane ...
	I0829 18:06:44.613569   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:44.613680   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:44.613772   21003 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:44.628134   21003 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:44.634006   21003 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:44.634068   21003 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:44.748283   21003 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:44.748472   21003 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:45.249786   21003 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995827ms
	I0829 18:06:45.249887   21003 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:50.747506   21003 kubeadm.go:310] [api-check] The API server is healthy after 5.501622111s
	I0829 18:06:50.761005   21003 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:50.778931   21003 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:50.804583   21003 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:50.804806   21003 kubeadm.go:310] [mark-control-plane] Marking the node addons-647117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:50.815965   21003 kubeadm.go:310] [bootstrap-token] Using token: wiq59h.4ta20vef60ifolag
	I0829 18:06:50.817393   21003 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:50.817515   21003 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:50.823008   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:50.829342   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:50.834828   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:50.837480   21003 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:50.840740   21003 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:51.153540   21003 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:51.619414   21003 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:52.154068   21003 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:52.154113   21003 kubeadm.go:310] 
	I0829 18:06:52.154186   21003 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:52.154195   21003 kubeadm.go:310] 
	I0829 18:06:52.154271   21003 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:52.154279   21003 kubeadm.go:310] 
	I0829 18:06:52.154298   21003 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:52.154372   21003 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:52.154426   21003 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:52.154436   21003 kubeadm.go:310] 
	I0829 18:06:52.154498   21003 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:52.154509   21003 kubeadm.go:310] 
	I0829 18:06:52.154564   21003 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:52.154571   21003 kubeadm.go:310] 
	I0829 18:06:52.154643   21003 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:52.154739   21003 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:52.154828   21003 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:52.154837   21003 kubeadm.go:310] 
	I0829 18:06:52.154960   21003 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:52.155076   21003 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:52.155085   21003 kubeadm.go:310] 
	I0829 18:06:52.155192   21003 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wiq59h.4ta20vef60ifolag \
	I0829 18:06:52.155350   21003 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 18:06:52.155395   21003 kubeadm.go:310] 	--control-plane 
	I0829 18:06:52.155404   21003 kubeadm.go:310] 
	I0829 18:06:52.155507   21003 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:52.155517   21003 kubeadm.go:310] 
	I0829 18:06:52.155624   21003 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wiq59h.4ta20vef60ifolag \
	I0829 18:06:52.155743   21003 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 18:06:52.156619   21003 kubeadm.go:310] W0829 18:06:41.972258     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:52.156965   21003 kubeadm.go:310] W0829 18:06:41.973234     816 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:52.157113   21003 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
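
[Editor's note] The --discovery-token-ca-cert-hash sha256:bea944... value in the join commands above pins the cluster CA for joining nodes; kubeadm derives it from the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info (RFC 7469 style pinning). The sketch below recomputes that hash from the CA certificate minikube placed on the node; the path is the one shown earlier in the log, and running it there should reproduce the logged value.

// ca_cert_hash.go: sketch of recomputing kubeadm's discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// assumption: run on the node, where minikube copied the cluster CA
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
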
	I0829 18:06:52.157145   21003 cni.go:84] Creating CNI manager for ""
	I0829 18:06:52.157162   21003 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:06:52.158997   21003 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:52.160298   21003 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:52.169724   21003 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:52.191549   21003 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:52.191676   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.191714   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-647117 minikube.k8s.io/updated_at=2024_08_29T18_06_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-647117 minikube.k8s.io/primary=true
	I0829 18:06:52.209914   21003 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:52.324976   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:52.825811   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.325292   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:53.825112   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.325820   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.825675   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.325178   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.825703   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.324989   21003 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.414413   21003 kubeadm.go:1113] duration metric: took 4.222809669s to wait for elevateKubeSystemPrivileges
	I0829 18:06:56.414449   21003 kubeadm.go:394] duration metric: took 14.598146711s to StartCluster
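
[Editor's note] The burst of "kubectl get sa default" calls above is a simple poll: minikube retries roughly every half second until the default ServiceAccount exists, which is what the 4.22s elevateKubeSystemPrivileges metric measures before StartCluster completes. A hedged sketch of that wait loop follows; the kubectl path and kubeconfig are the ones in the log, while the loop structure and two-minute deadline are illustrative.

// wait_default_sa.go: sketch of polling for the default ServiceAccount after kubeadm init.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl" // path as shown in the log
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the retries above
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}
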
	I0829 18:06:56.414471   21003 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.414595   21003 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:06:56.415169   21003 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:56.415361   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:56.415396   21003 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:06:56.415462   21003 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:56.415582   21003 addons.go:69] Setting yakd=true in profile "addons-647117"
	I0829 18:06:56.415605   21003 addons.go:69] Setting registry=true in profile "addons-647117"
	I0829 18:06:56.415609   21003 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-647117"
	I0829 18:06:56.415625   21003 addons.go:69] Setting helm-tiller=true in profile "addons-647117"
	I0829 18:06:56.415629   21003 addons.go:69] Setting volcano=true in profile "addons-647117"
	I0829 18:06:56.415588   21003 addons.go:69] Setting ingress=true in profile "addons-647117"
	I0829 18:06:56.415645   21003 addons.go:234] Setting addon registry=true in "addons-647117"
	I0829 18:06:56.415651   21003 addons.go:234] Setting addon helm-tiller=true in "addons-647117"
	I0829 18:06:56.415663   21003 addons.go:234] Setting addon volcano=true in "addons-647117"
	I0829 18:06:56.415667   21003 addons.go:69] Setting volumesnapshots=true in profile "addons-647117"
	I0829 18:06:56.415668   21003 addons.go:69] Setting storage-provisioner=true in profile "addons-647117"
	I0829 18:06:56.415681   21003 addons.go:234] Setting addon volumesnapshots=true in "addons-647117"
	I0829 18:06:56.415685   21003 addons.go:234] Setting addon storage-provisioner=true in "addons-647117"
	I0829 18:06:56.415691   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415696   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415702   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415706   21003 addons.go:69] Setting inspektor-gadget=true in profile "addons-647117"
	I0829 18:06:56.415708   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415724   21003 addons.go:234] Setting addon inspektor-gadget=true in "addons-647117"
	I0829 18:06:56.415751   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415641   21003 addons.go:234] Setting addon yakd=true in "addons-647117"
	I0829 18:06:56.415802   21003 addons.go:69] Setting ingress-dns=true in profile "addons-647117"
	I0829 18:06:56.415696   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415835   21003 addons.go:234] Setting addon ingress-dns=true in "addons-647117"
	I0829 18:06:56.415836   21003 addons.go:69] Setting metrics-server=true in profile "addons-647117"
	I0829 18:06:56.415856   21003 addons.go:234] Setting addon metrics-server=true in "addons-647117"
	I0829 18:06:56.415872   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.415889   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416119   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.415659   21003 addons.go:234] Setting addon ingress=true in "addons-647117"
	I0829 18:06:56.416144   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416143   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416147   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416156   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416160   21003 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-647117"
	I0829 18:06:56.416176   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416181   21003 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-647117"
	I0829 18:06:56.415611   21003 addons.go:69] Setting default-storageclass=true in profile "addons-647117"
	I0829 18:06:56.416203   21003 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-647117"
	I0829 18:06:56.416210   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416228   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416233   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416146   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416284   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415822   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416327   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416344   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416347   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416361   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416433   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.415659   21003 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-647117"
	I0829 18:06:56.415615   21003 addons.go:69] Setting gcp-auth=true in profile "addons-647117"
	I0829 18:06:56.416493   21003 mustload.go:65] Loading cluster: addons-647117
	I0829 18:06:56.416505   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416536   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416457   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415599   21003 addons.go:69] Setting cloud-spanner=true in profile "addons-647117"
	I0829 18:06:56.416608   21003 addons.go:234] Setting addon cloud-spanner=true in "addons-647117"
	I0829 18:06:56.416650   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416663   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:56.416670   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416730   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416786   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416818   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.416884   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.416926   21003 config.go:182] Loaded profile config "addons-647117": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:06:56.416653   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.416993   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.415606   21003 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-647117"
	I0829 18:06:56.417062   21003 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-647117"
	I0829 18:06:56.417124   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.417157   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.417190   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.417211   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.417237   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.417759   21003 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:56.431414   21003 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:56.436670   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0829 18:06:56.437146   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.437246   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0829 18:06:56.437394   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36717
	I0829 18:06:56.437610   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.437628   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.437687   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0829 18:06:56.437809   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.437950   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.438197   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.438211   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.438343   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.438359   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.438942   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.438986   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.442810   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.442949   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0829 18:06:56.442939   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.443564   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.443717   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.443773   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.444026   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.444479   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.444515   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.446472   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.446513   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.446968   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.447446   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.447153   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.447525   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.447738   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.447816   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.448300   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.448328   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.451235   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.451255   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.451627   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.452195   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.452230   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.452570   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0829 18:06:56.453048   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.453560   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.453579   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.453925   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.454471   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.454511   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.472672   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0829 18:06:56.473419   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478181   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0829 18:06:56.478196   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0829 18:06:56.478338   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0829 18:06:56.478756   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478771   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.478855   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.479244   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.479270   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.479636   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.479717   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0829 18:06:56.479939   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.479951   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.480164   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.480179   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.480246   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.480250   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.480279   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.480366   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.480555   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.480617   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0829 18:06:56.480802   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.480928   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.480946   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481087   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.481111   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481293   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.481700   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.481719   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.481740   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.481751   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.482059   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I0829 18:06:56.482184   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.482473   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.482798   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.482822   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.482948   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.482978   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.483112   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.483588   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.483605   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.485285   21003 addons.go:234] Setting addon default-storageclass=true in "addons-647117"
	I0829 18:06:56.485323   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.485708   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.485742   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.485941   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
	I0829 18:06:56.485968   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.486037   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0829 18:06:56.486453   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.486581   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.486798   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.486833   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.487055   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.487069   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.487187   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.487201   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.487491   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.487517   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.487987   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.488025   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.488059   21003 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:56.488507   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.488534   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.488746   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.489095   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.489117   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.490168   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.490301   21003 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:56.491450   21003 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:56.491467   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:56.491485   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.492948   21003 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-647117"
	I0829 18:06:56.492988   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:06:56.493330   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.493369   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.496719   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.497204   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.497226   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.498188   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0829 18:06:56.498268   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.498509   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.498603   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.498650   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.498793   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.499537   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.499570   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.499902   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.500440   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.500481   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.501294   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0829 18:06:56.502049   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.502504   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.502535   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.503107   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.503657   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.503701   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.507276   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44693
	I0829 18:06:56.507768   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.508382   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.508406   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.508722   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.508861   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.510677   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.512639   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:56.513776   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:56.513797   21003 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:56.513817   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.515319   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0829 18:06:56.515800   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.516786   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.516805   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.516856   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.517214   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.517235   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.517370   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.517505   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.517553   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.517600   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.517708   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.518168   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.518208   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.532347   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0829 18:06:56.532894   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0829 18:06:56.533030   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.533414   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.533591   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.533603   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.534067   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.534409   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.534422   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.534514   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.534861   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.535226   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.535924   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0829 18:06:56.536353   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.536420   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0829 18:06:56.536755   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.536837   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0829 18:06:56.537295   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.537312   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.537384   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.537694   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.537869   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.538075   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.538716   21003 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:56.538773   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.538789   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.538859   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I0829 18:06:56.539014   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.539114   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42041
	I0829 18:06:56.539308   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.539327   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.539346   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.539533   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.539598   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.539646   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.540006   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.540014   21003 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:56.540022   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.540028   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:56.540045   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.540163   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.540232   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.540650   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.541057   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:06:56.541096   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:06:56.541262   21003 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:56.541638   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541311   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.540506   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.541936   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541939   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.541995   21003 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:56.543193   21003 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.543211   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:56.543229   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.544013   21003 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:56.544028   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:56.544045   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.545403   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.545625   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.545907   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.546106   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.546226   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.546589   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.546667   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.546715   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.547188   21003 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:56.547565   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.548163   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.547666   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:56.548188   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:06:56.547970   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.548506   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:56.548516   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:06:56.548518   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:56.548537   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.548541   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:56.548548   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:06:56.548556   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:06:56.548563   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:06:56.548753   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.548823   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.548937   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.549134   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.549334   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:56.549403   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.549468   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0829 18:06:56.549564   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.549609   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.549623   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.549772   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.549834   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.549914   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.549974   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.550110   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.550260   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.550571   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:06:56.550571   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:06:56.550591   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	W0829 18:06:56.550660   21003 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0829 18:06:56.550690   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.550703   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.551269   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.551508   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.552601   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:56.552711   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.552948   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.553349   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.553376   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.553418   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.553567   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.553722   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.553833   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.554958   21003 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:56.554967   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:56.556064   21003 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:56.556082   21003 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:56.556101   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.556540   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0829 18:06:56.557101   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.557246   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:56.557716   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.557731   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.558069   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.558265   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.559622   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.559739   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:56.560081   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.560099   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.560311   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.560461   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.560522   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0829 18:06:56.560720   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I0829 18:06:56.560690   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.560989   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.561397   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0829 18:06:56.561537   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.561727   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:56.561802   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.561893   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.562018   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.562038   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.562455   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0829 18:06:56.562581   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.562586   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.562691   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.562761   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.563130   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.563148   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.563265   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.563283   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.563450   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.563577   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:56.563731   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.563743   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.563805   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.564012   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.564052   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42963
	I0829 18:06:56.564704   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.564786   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.564795   21003 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:56.565163   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.565201   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.565775   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.565872   21003 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:56.565953   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:56.565966   21003 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:56.565982   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.565984   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.566000   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.566529   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.566553   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566600   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566876   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:56.566891   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:56.566913   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.566921   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.567522   21003 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:56.568498   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.568666   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:56.568680   21003 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:56.568693   21003 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:56.568712   21003 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:56.568697   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.569831   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:56.569913   21003 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:56.569926   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:56.569945   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.570902   21003 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:56.571368   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571392   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571846   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.571869   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.571947   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.571967   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.572003   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.572159   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.572233   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.572258   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.572364   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.572388   21003 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:56.572399   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:56.572413   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.572417   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.572536   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.572741   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.572872   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.573786   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.573963   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574278   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.574356   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574444   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:56.574528   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.574569   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.574785   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.574857   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.575066   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.575072   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.575270   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.575284   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.575483   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.575644   21003 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:56.575656   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:56.575670   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.575415   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.577142   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0829 18:06:56.577490   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:06:56.577544   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.577856   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:06:56.577875   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:06:56.578165   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.578188   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.578358   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.578394   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:06:56.578517   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.578591   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:06:56.578730   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.578852   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:06:56.582225   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:06:56.582235   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.582242   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.582251   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.582262   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.582402   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.582415   21003 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:56.582424   21003 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:56.582439   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:06:56.582563   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.582717   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	W0829 18:06:56.583947   21003 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35106->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.583981   21003 retry.go:31] will retry after 265.336769ms: ssh: handshake failed: read tcp 192.168.39.1:35106->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.585697   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.586161   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:06:56.586192   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:06:56.586351   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:06:56.586491   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:06:56.586629   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:06:56.586736   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	W0829 18:06:56.607131   21003 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35120->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.607153   21003 retry.go:31] will retry after 305.774806ms: ssh: handshake failed: read tcp 192.168.39.1:35120->192.168.39.43:22: read: connection reset by peer
	I0829 18:06:56.875799   21003 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:56.875873   21003 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:56.927872   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:56.928816   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:57.008376   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:57.008396   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:57.014179   21003 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:57.014203   21003 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:57.027140   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:57.027167   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:57.043157   21003 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:57.043177   21003 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:57.070356   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:57.099182   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:57.099201   21003 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:57.138825   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:57.138848   21003 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:57.151051   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:57.190016   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:06:57.190037   21003 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:06:57.210335   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:57.210355   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:57.221961   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:57.270521   21003 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:57.270543   21003 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:06:57.315049   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:57.332317   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:06:57.332343   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:06:57.365240   21003 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:57.365263   21003 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:57.370347   21003 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:57.370362   21003 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:57.413086   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:57.413118   21003 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:57.414407   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:06:57.414426   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:06:57.436369   21003 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.436388   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:06:57.485961   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:06:57.524473   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:57.562208   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:06:57.563959   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:57.571757   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:06:57.571776   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:06:57.587934   21003 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:06:57.587954   21003 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:06:57.667126   21003 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:57.667154   21003 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:06:57.696933   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:57.696960   21003 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:57.697118   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:06:57.697134   21003 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:06:57.826566   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:06:57.826587   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:06:57.883248   21003 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:06:57.883276   21003 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:06:57.928373   21003 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:57.928400   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:06:57.998581   21003 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:57.998607   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:06:58.183428   21003 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:06:58.183455   21003 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:06:58.241042   21003 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:06:58.241068   21003 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:06:58.256257   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:06:58.316439   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:06:58.443343   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:06:58.443364   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:06:58.445449   21003 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:06:58.445468   21003 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:06:58.660398   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:06:58.660424   21003 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:06:58.662312   21003 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786403949s)
	I0829 18:06:58.662328   21003 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.786494537s)
	I0829 18:06:58.662342   21003 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:06:58.663018   21003 node_ready.go:35] waiting up to 6m0s for node "addons-647117" to be "Ready" ...
	I0829 18:06:58.666067   21003 node_ready.go:49] node "addons-647117" has status "Ready":"True"
	I0829 18:06:58.666084   21003 node_ready.go:38] duration metric: took 3.048985ms for node "addons-647117" to be "Ready" ...
	I0829 18:06:58.666106   21003 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:06:58.676217   21003 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace to be "Ready" ...
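The `node_ready`/`pod_ready` lines above poll the API server until the node and each system-critical pod report a `Ready` condition. A minimal sketch of that kind of check with client-go is shown below; it is illustrative only (not minikube's actual pod_ready.go), and the kubeconfig path is a placeholder.

```go
// Illustrative sketch only (not minikube's pod_ready.go): poll a pod until its
// Ready condition is True, roughly what the "waiting up to 6m0s for pod ..." lines do.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; on the minikube node the log uses /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-4d9bn", 6*time.Minute))
}
```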
	I0829 18:06:58.801455   21003 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:58.801477   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:06:58.995484   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:06:59.015898   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:06:59.015928   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:06:59.185715   21003 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-647117" context rescaled to 1 replicas
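Rescaling the coredns deployment to a single replica (the reason one of the two original coredns pods later finishes with phase "Succeeded" and is skipped) can be expressed through the autoscaling/v1 Scale subresource. The following is a hedged sketch under that assumption; minikube's kapi.go may do it differently, and the kubeconfig path is a placeholder.

```go
// Sketch only: set the coredns deployment to one replica via the Scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // keep a single coredns replica
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	fmt.Println(rescaleCoreDNS(context.TODO(), kubernetes.NewForConfigOrDie(cfg)))
}
```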
	I0829 18:06:59.282748   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:06:59.282771   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:06:59.559451   21003 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:06:59.559475   21003 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:06:59.736185   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:00.724928   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:01.060208   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.13229736s)
	I0829 18:07:01.060262   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.131426124s)
	I0829 18:07:01.060266   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060279   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060285   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060293   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060306   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.989913885s)
	I0829 18:07:01.060348   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060367   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060369   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.838385594s)
	I0829 18:07:01.060384   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060397   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060352   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.909277018s)
	I0829 18:07:01.060452   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060461   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060780   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060786   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.060796   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.060800   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060805   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060813   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060816   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.060836   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.060843   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.060850   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.060857   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.060978   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061004   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061014   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061023   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061246   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061254   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061263   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061270   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061525   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.061547   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061554   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.061561   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.061577   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.061791   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.061812   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.061818   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.062559   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062587   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062611   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.062618   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.062830   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.062864   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.062872   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.063136   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.063173   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.063180   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.063261   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.063273   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.238880   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.238905   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.239324   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.239339   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.239337   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.571208   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.256119707s)
	I0829 18:07:01.571266   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.571285   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.571510   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.571527   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.571536   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.571543   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.571811   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.571832   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.571841   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.681468   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.681491   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.681800   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.681893   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.681905   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979228   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.49321647s)
	I0829 18:07:01.979257   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.454750161s)
	I0829 18:07:01.979274   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979291   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979292   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979305   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979329   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.417089396s)
	I0829 18:07:01.979375   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979389   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979660   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.979674   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979683   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979691   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.979700   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.979728   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.979734   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.979747   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.979761   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.979769   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.980006   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.980037   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980048   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980050   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:01.980086   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980094   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980103   21003 addons.go:475] Verifying addon registry=true in "addons-647117"
	I0829 18:07:01.980373   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.980385   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.980394   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:01.980402   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:01.981457   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:01.981470   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:01.981480   21003 addons.go:475] Verifying addon metrics-server=true in "addons-647117"
	I0829 18:07:01.982538   21003 out.go:177] * Verifying registry addon...
	I0829 18:07:01.984946   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:07:02.031640   21003 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:07:02.031663   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
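The "Found 2 Pods for label selector ..." lookup above is a list-by-label query followed by phase polling. A short illustrative sketch of the listing half with client-go (placeholder kubeconfig path; selector string taken from the log):

```go
// Sketch of listing pods by label selector and reporting their phases. Illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=registry", // selector from the log
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}
```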
	I0829 18:07:02.525184   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.000875   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.183701   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:03.491799   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:03.593792   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:07:03.593832   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:07:03.597360   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.597814   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:07:03.597845   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.598025   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:07:03.598268   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:07:03.598470   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:07:03.598664   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
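The sshutil.go line above dials the node over SSH with the machine's private key. A rough equivalent with golang.org/x/crypto/ssh is sketched below; it is illustrative only, the key path and address are copied from the log, and host-key verification is skipped purely for brevity.

```go
// Sketch of dialing the minikube node over SSH with a key file. Not minikube's sshutil.go.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; verify host keys in real use
	}
	client, err := ssh.Dial("tcp", "192.168.39.43:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}
```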
	I0829 18:07:03.833461   21003 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:07:03.876546   21003 addons.go:234] Setting addon gcp-auth=true in "addons-647117"
	I0829 18:07:03.876598   21003 host.go:66] Checking if "addons-647117" exists ...
	I0829 18:07:03.876890   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:07:03.876915   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:07:03.892569   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0829 18:07:03.893039   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:07:03.893483   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:07:03.893502   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:07:03.893860   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:07:03.894349   21003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:07:03.894372   21003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:07:03.908630   21003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0829 18:07:03.909028   21003 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:07:03.909510   21003 main.go:141] libmachine: Using API Version  1
	I0829 18:07:03.909530   21003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:07:03.909878   21003 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:07:03.910100   21003 main.go:141] libmachine: (addons-647117) Calling .GetState
	I0829 18:07:03.911780   21003 main.go:141] libmachine: (addons-647117) Calling .DriverName
	I0829 18:07:03.912019   21003 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:07:03.912041   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHHostname
	I0829 18:07:03.914511   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.914935   21003 main.go:141] libmachine: (addons-647117) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:0d:0e", ip: ""} in network mk-addons-647117: {Iface:virbr1 ExpiryTime:2024-08-29 19:06:28 +0000 UTC Type:0 Mac:52:54:00:b2:0d:0e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:addons-647117 Clientid:01:52:54:00:b2:0d:0e}
	I0829 18:07:03.914960   21003 main.go:141] libmachine: (addons-647117) DBG | domain addons-647117 has defined IP address 192.168.39.43 and MAC address 52:54:00:b2:0d:0e in network mk-addons-647117
	I0829 18:07:03.915116   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHPort
	I0829 18:07:03.915301   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHKeyPath
	I0829 18:07:03.915464   21003 main.go:141] libmachine: (addons-647117) Calling .GetSSHUsername
	I0829 18:07:03.915620   21003 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/addons-647117/id_rsa Username:docker}
	I0829 18:07:04.022481   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.501297   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:04.735718   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.17172825s)
	I0829 18:07:04.735757   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.735766   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.735865   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.479566427s)
	W0829 18:07:04.735914   21003 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:04.735926   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.419451964s)
	I0829 18:07:04.735958   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.735981   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.735976   21003 retry.go:31] will retry after 229.112003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
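The failure above is the usual race when CRDs and custom resources are applied in one shot: the VolumeSnapshotClass cannot be mapped until the just-created CRDs are established, so the apply is retried (and, as later lines show, rerun with `--force`). A hedged sketch of that retry pattern, shelling out to kubectl via os/exec with a placeholder manifest path:

```go
// Sketch only: rerun `kubectl apply -f <manifest>` with backoff until it succeeds,
// papering over the "ensure CRDs are installed first" race seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	return lastErr
}

func main() {
	// Placeholder manifest; the log applies several snapshot/CSI manifests in one command.
	fmt.Println(applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 250*time.Millisecond))
}
```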
	I0829 18:07:04.736053   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736066   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736077   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736085   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736150   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.740634409s)
	I0829 18:07:04.736182   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736194   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736197   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736211   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736215   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736300   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736221   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.736347   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736362   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736373   21003 addons.go:475] Verifying addon ingress=true in "addons-647117"
	I0829 18:07:04.736675   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736697   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.736704   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736712   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736800   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.736819   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.736832   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:04.736840   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:04.737121   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:04.737148   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:04.737155   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:04.739047   21003 out.go:177] * Verifying ingress addon...
	I0829 18:07:04.739055   21003 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-647117 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:07:04.741307   21003 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:07:04.745091   21003 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:07:04.745106   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:04.965918   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:04.987862   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.250313   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.502670   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:05.726015   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:05.763615   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:05.799116   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.062879943s)
	I0829 18:07:05.799136   21003 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.88709264s)
	I0829 18:07:05.799162   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:05.799177   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:05.799451   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:05.799474   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:05.799484   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:05.799493   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:05.799497   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:05.799758   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:05.799780   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:05.799790   21003 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-647117"
	I0829 18:07:05.799799   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:05.800504   21003 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:07:05.801286   21003 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:07:05.802603   21003 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:07:05.803538   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:07:05.803551   21003 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:07:05.803578   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:07:05.837611   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:07:05.837635   21003 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:07:05.856926   21003 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:07:05.856951   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:05.886792   21003 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:05.886814   21003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:07:05.934598   21003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:06.250813   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.251110   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.348403   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.488440   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:06.745795   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:06.807735   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:06.996848   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.105783   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.139806103s)
	I0829 18:07:07.105829   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.105845   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.106137   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:07.107594   21003 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0829 18:07:07.107610   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.107623   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.107632   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.107958   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.107976   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.212977   21003 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.278337274s)
	I0829 18:07:07.213038   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.213058   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.213352   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.213372   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.213383   21003 main.go:141] libmachine: Making call to close driver server
	I0829 18:07:07.213390   21003 main.go:141] libmachine: (addons-647117) Calling .Close
	I0829 18:07:07.213624   21003 main.go:141] libmachine: (addons-647117) DBG | Closing plugin on server side
	I0829 18:07:07.213654   21003 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:07:07.213671   21003 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:07:07.215310   21003 addons.go:475] Verifying addon gcp-auth=true in "addons-647117"
	I0829 18:07:07.217287   21003 out.go:177] * Verifying gcp-auth addon...
	I0829 18:07:07.219398   21003 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:07:07.246816   21003 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:07:07.246836   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.309709   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.311474   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.490556   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:07.723447   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:07.746060   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:07.808691   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:07.989564   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.182573   21003 pod_ready.go:103] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:08.222445   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.245717   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.308826   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.489048   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:08.723297   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:08.745592   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:08.808123   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:08.989930   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.185160   21003 pod_ready.go:98] pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.43 HostIPs:[{IP:192.168.39.43}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:07:01 +0000 UTC,FinishedAt:2024-08-29 18:07:06 +0000 UTC,ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485 Started:0xc0027c21b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002646e00} {Name:kube-api-access-fc2r9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002646e10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:09.185196   21003 pod_ready.go:82] duration metric: took 10.508944074s for pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace to be "Ready" ...
	E0829 18:07:09.185208   21003 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-4d9bn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:07:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-29 18:06:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.43 HostIPs:[{IP:192.168.39.43}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-29 18:06:56 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-29 18:07:01 +0000 UTC,FinishedAt:2024-08-29 18:07:06 +0000 UTC,ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://d59fbec3165e2f0968560f98561b26d4cbd8d5aed4f3902715da50a6f73f1485 Started:0xc0027c21b0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002646e00} {Name:kube-api-access-fc2r9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002646e10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0829 18:07:09.185217   21003 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.192464   21003 pod_ready.go:93] pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.192485   21003 pod_ready.go:82] duration metric: took 7.259302ms for pod "coredns-6f6b679f8f-nhhtz" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.192494   21003 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.198684   21003 pod_ready.go:93] pod "etcd-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.198704   21003 pod_ready.go:82] duration metric: took 6.204777ms for pod "etcd-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.198713   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.203256   21003 pod_ready.go:93] pod "kube-apiserver-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.203273   21003 pod_ready.go:82] duration metric: took 4.55494ms for pod "kube-apiserver-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.203282   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.207437   21003 pod_ready.go:93] pod "kube-controller-manager-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.207455   21003 pod_ready.go:82] duration metric: took 4.167044ms for pod "kube-controller-manager-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.207464   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dptz4" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.223722   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.326499   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.326509   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.489972   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:09.580220   21003 pod_ready.go:93] pod "kube-proxy-dptz4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.580245   21003 pod_ready.go:82] duration metric: took 372.774467ms for pod "kube-proxy-dptz4" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.580257   21003 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.726036   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:09.745103   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:09.808109   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:09.980305   21003 pod_ready.go:93] pod "kube-scheduler-addons-647117" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:09.980340   21003 pod_ready.go:82] duration metric: took 400.073461ms for pod "kube-scheduler-addons-647117" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:09.980352   21003 pod_ready.go:39] duration metric: took 11.314232535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:09.980374   21003 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:09.980445   21003 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:09.988253   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.029423   21003 api_server.go:72] duration metric: took 13.613993413s to wait for apiserver process to appear ...
	I0829 18:07:10.029447   21003 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:10.029482   21003 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0829 18:07:10.033725   21003 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0829 18:07:10.034999   21003 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:10.035018   21003 api_server.go:131] duration metric: took 5.56499ms to wait for apiserver health ...
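The healthz probe above is an HTTPS GET against the API server that expects a 200 response with body "ok". A minimal sketch follows; the address is taken from the log, and certificate verification is disabled here only to keep the example short (minikube itself verifies against the cluster CA):

```go
// Sketch of the apiserver healthz check. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only for this sketch; use the cluster CA in real checks.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.43:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log above shows "200: ok"
}
```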
	I0829 18:07:10.035026   21003 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:10.188946   21003 system_pods.go:59] 18 kube-system pods found
	I0829 18:07:10.188982   21003 system_pods.go:61] "coredns-6f6b679f8f-nhhtz" [bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2] Running
	I0829 18:07:10.188990   21003 system_pods.go:61] "csi-hostpath-attacher-0" [442c8a1e-b851-4b2f-a39a-da8738074897] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.188996   21003 system_pods.go:61] "csi-hostpath-resizer-0" [fb7dfca7-b2eb-492b-934b-81a33c34709a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.189004   21003 system_pods.go:61] "csi-hostpathplugin-b2xkq" [e62b7174-47eb-4ff8-a1db-76f9936a924d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.189009   21003 system_pods.go:61] "etcd-addons-647117" [9f96c1c2-351b-4af4-9c9c-89ed5623670f] Running
	I0829 18:07:10.189013   21003 system_pods.go:61] "kube-apiserver-addons-647117" [035080d0-8ea6-4d22-9861-28b1129fdabb] Running
	I0829 18:07:10.189017   21003 system_pods.go:61] "kube-controller-manager-addons-647117" [937119a3-ad43-498c-8a11-10919cd3cf8c] Running
	I0829 18:07:10.189024   21003 system_pods.go:61] "kube-ingress-dns-minikube" [a9a425c2-2fd3-4e62-be25-f26a8f87ddd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.189030   21003 system_pods.go:61] "kube-proxy-dptz4" [9a386c43-bd19-4ba5-a2be-6c0019adeedd] Running
	I0829 18:07:10.189035   21003 system_pods.go:61] "kube-scheduler-addons-647117" [159e6309-ac85-43f4-9c40-f6bf4ccb7035] Running
	I0829 18:07:10.189042   21003 system_pods.go:61] "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.189050   21003 system_pods.go:61] "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.189060   21003 system_pods.go:61] "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.189068   21003 system_pods.go:61] "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.189079   21003 system_pods.go:61] "snapshot-controller-56fcc65765-kgrh6" [1f305fc4-1a8a-47d0-bb41-7c8f77b1459c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.189085   21003 system_pods.go:61] "snapshot-controller-56fcc65765-kpgzh" [62b317a2-39aa-4da5-a04b-a97a0c67f06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.189090   21003 system_pods.go:61] "storage-provisioner" [abb10014-4a67-4ddf-ba6b-89598283be68] Running
	I0829 18:07:10.189099   21003 system_pods.go:61] "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:07:10.189105   21003 system_pods.go:74] duration metric: took 154.074157ms to wait for pod list to return data ...
	I0829 18:07:10.189116   21003 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:07:10.222838   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.247273   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.309243   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:10.380898   21003 default_sa.go:45] found service account: "default"
	I0829 18:07:10.380924   21003 default_sa.go:55] duration metric: took 191.802984ms for default service account to be created ...
	I0829 18:07:10.380932   21003 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:07:10.488590   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:10.584828   21003 system_pods.go:86] 18 kube-system pods found
	I0829 18:07:10.584854   21003 system_pods.go:89] "coredns-6f6b679f8f-nhhtz" [bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2] Running
	I0829 18:07:10.584864   21003 system_pods.go:89] "csi-hostpath-attacher-0" [442c8a1e-b851-4b2f-a39a-da8738074897] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:10.584871   21003 system_pods.go:89] "csi-hostpath-resizer-0" [fb7dfca7-b2eb-492b-934b-81a33c34709a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:10.584878   21003 system_pods.go:89] "csi-hostpathplugin-b2xkq" [e62b7174-47eb-4ff8-a1db-76f9936a924d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:10.584883   21003 system_pods.go:89] "etcd-addons-647117" [9f96c1c2-351b-4af4-9c9c-89ed5623670f] Running
	I0829 18:07:10.584888   21003 system_pods.go:89] "kube-apiserver-addons-647117" [035080d0-8ea6-4d22-9861-28b1129fdabb] Running
	I0829 18:07:10.584893   21003 system_pods.go:89] "kube-controller-manager-addons-647117" [937119a3-ad43-498c-8a11-10919cd3cf8c] Running
	I0829 18:07:10.584902   21003 system_pods.go:89] "kube-ingress-dns-minikube" [a9a425c2-2fd3-4e62-be25-f26a8f87ddd1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 18:07:10.584907   21003 system_pods.go:89] "kube-proxy-dptz4" [9a386c43-bd19-4ba5-a2be-6c0019adeedd] Running
	I0829 18:07:10.584913   21003 system_pods.go:89] "kube-scheduler-addons-647117" [159e6309-ac85-43f4-9c40-f6bf4ccb7035] Running
	I0829 18:07:10.584924   21003 system_pods.go:89] "metrics-server-8988944d9-9pvr6" [3d5398d7-70c3-47b5-8cb8-da262a7c5736] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:10.584935   21003 system_pods.go:89] "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 18:07:10.584945   21003 system_pods.go:89] "registry-6fb4cdfc84-25kkf" [cc4a9ea4-4575-4df4-a260-191792ddc309] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 18:07:10.584950   21003 system_pods.go:89] "registry-proxy-xqhqg" [dae462a3-dc8d-436d-8360-ee8d164ab845] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:10.584955   21003 system_pods.go:89] "snapshot-controller-56fcc65765-kgrh6" [1f305fc4-1a8a-47d0-bb41-7c8f77b1459c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.584965   21003 system_pods.go:89] "snapshot-controller-56fcc65765-kpgzh" [62b317a2-39aa-4da5-a04b-a97a0c67f06e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:10.584969   21003 system_pods.go:89] "storage-provisioner" [abb10014-4a67-4ddf-ba6b-89598283be68] Running
	I0829 18:07:10.584975   21003 system_pods.go:89] "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0829 18:07:10.584984   21003 system_pods.go:126] duration metric: took 204.046778ms to wait for k8s-apps to be running ...
	I0829 18:07:10.584994   21003 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:07:10.585045   21003 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:07:10.626258   21003 system_svc.go:56] duration metric: took 41.254313ms WaitForService to wait for kubelet
	I0829 18:07:10.626292   21003 kubeadm.go:582] duration metric: took 14.210866708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:07:10.626318   21003 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:07:10.723351   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:10.745625   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:10.780607   21003 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:07:10.780633   21003 node_conditions.go:123] node cpu capacity is 2
	I0829 18:07:10.780645   21003 node_conditions.go:105] duration metric: took 154.321354ms to run NodePressure ...
	I0829 18:07:10.780656   21003 start.go:241] waiting for startup goroutines ...
	I0829 18:07:10.808661   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.432004   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.432056   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.432507   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.432753   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.531343   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.722334   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:11.746103   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:11.808992   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:11.988778   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.224840   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:12.245531   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.307880   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.488647   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.723996   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:12.745184   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.808714   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.988428   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.223147   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:13.245839   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.308973   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.875413   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.875496   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:13.875555   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.875916   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.988310   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.223406   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:14.246021   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.308758   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.489231   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.723115   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:14.750809   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.848451   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.989629   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.223214   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:15.245568   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.307971   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.488573   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.724020   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:15.747296   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.808899   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.989134   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.223214   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:16.245841   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.308609   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.489231   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.722831   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:16.745495   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.807750   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.988112   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.223152   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:17.245700   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.308534   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.490053   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.722271   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:17.745672   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.808093   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.989536   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.223076   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:18.245676   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.308003   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.488710   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.724041   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:18.745187   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.808284   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.988906   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.222566   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:19.246507   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.307703   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.488524   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.723848   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:19.744936   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.807986   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.989362   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.223136   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:20.245701   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.308166   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.488793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.722701   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:20.744935   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.807920   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.989378   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.223255   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:21.245626   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.307716   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.488497   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.722746   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:21.744978   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.808369   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.989361   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.223301   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:22.245645   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.307754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.488146   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.724753   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:22.745129   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.817804   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.989553   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.223526   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:23.245605   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.308356   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.488772   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.723300   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:23.745589   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.807597   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.988552   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.223387   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:24.245787   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.308121   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.489472   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.723639   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:24.744866   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.814322   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.989050   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.223626   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:25.244872   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.308113   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.489018   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.723187   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:25.745594   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.808380   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.990284   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.223467   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:26.246478   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.311430   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.489100   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.723298   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:26.745982   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.808347   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.989395   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.223619   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:27.244802   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.308288   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.488267   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.723514   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:27.745730   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.807863   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.989687   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.223318   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:28.245983   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.308333   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.488782   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.722485   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:28.745638   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.808513   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.991921   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.222789   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:29.245435   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.308533   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.488400   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.723378   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:29.745288   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.807764   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.989287   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.223850   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:30.245679   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.307898   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.488344   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.723583   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:30.745909   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.808358   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.989347   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.223420   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:31.245676   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.308106   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.489548   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.723984   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:31.752426   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.808206   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.988904   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.222648   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:32.245333   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.307744   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.488573   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.724105   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:32.825629   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.825917   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.989527   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.223029   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:33.245355   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.308032   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.490376   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.722861   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:33.745432   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.808944   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.992715   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.223303   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:34.245804   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.308469   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.489113   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.722859   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:34.745014   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.809535   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.990897   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.223016   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:35.245393   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.307861   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.489500   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.724153   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:35.745295   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.808675   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.992470   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.224494   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:36.245850   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.308073   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.488905   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.723280   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:36.745428   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.807550   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.989313   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.223233   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:37.246873   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.309007   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.489533   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.723538   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:37.745569   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.809432   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.989055   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.223047   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:38.245660   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.308142   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.488344   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.723366   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:38.745351   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.808393   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.988503   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.223854   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:39.245533   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.307984   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.488928   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.722252   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:39.746300   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.808576   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.989080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.223015   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:40.245885   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.324651   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.489080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.722990   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:40.745516   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.808575   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.988689   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.223013   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:41.245430   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.308188   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.489125   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.723598   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:41.744926   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.808306   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.989614   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:42.224132   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:42.245427   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.307702   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.489328   21003 kapi.go:107] duration metric: took 40.504379034s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:42.723558   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:42.745851   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.808681   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.497177   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:43.497724   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.497761   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.722981   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:43.745692   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.807475   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.222828   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:44.245874   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.325234   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.723309   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:44.745739   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.807721   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.223946   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:45.245318   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.309088   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.723267   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:45.745838   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.808262   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.223279   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:46.245972   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.308455   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.722988   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:46.745976   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.808159   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.223759   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:47.245074   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.308591   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.723579   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:47.746171   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.808847   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.223841   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:48.245152   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.309348   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.722985   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:48.745588   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.808431   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.223107   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:49.245680   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.308240   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.723337   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:49.745413   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.807755   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.223677   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:50.245190   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.308677   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.723917   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:50.745139   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.808544   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.223080   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:51.245425   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.308106   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.723688   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:51.746081   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.808225   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.223806   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:52.326377   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.327351   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.725059   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:52.826530   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.826759   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.228476   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:53.245760   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.309747   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.722617   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:53.746004   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.808430   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.517283   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:54.517839   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.518018   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.723061   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:54.746186   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.811981   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.222608   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:55.246316   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.308886   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.722235   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:55.745334   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.019434   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.223858   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:56.245409   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.307995   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.722626   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:56.745974   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.808140   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.223268   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:57.256102   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.308364   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.726325   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:57.745974   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.808877   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.223559   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:58.246847   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.312157   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.727333   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:58.746318   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.808148   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.222345   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:59.245913   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.307531   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.722489   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:07:59.745604   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.807676   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.271245   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:00.272539   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.308316   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.723754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:00.745187   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.807594   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.223141   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:01.245994   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.308389   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.723190   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:01.745545   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.807926   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.570569   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:02.571356   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.571633   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.724397   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:02.747272   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.826148   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.223815   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:03.246608   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.307864   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.726393   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:03.828835   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.828904   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.223011   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.245511   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.308195   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.723188   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:04.745550   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.807502   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.223443   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.246051   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.308712   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.723117   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:05.745574   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.808834   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.226761   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.245664   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.307618   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.725180   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:06.748981   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.808801   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.226928   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.245835   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.308980   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.722723   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:07.745324   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.807345   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.223879   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.325379   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.325434   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.725790   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:08.744949   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.826386   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.223279   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:09.246040   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.308012   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.723363   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:09.746259   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.809000   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.222946   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.252397   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.326511   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.726046   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:10.746259   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.809839   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.223348   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.246062   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.309338   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.728846   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:11.749115   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.809623   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.225216   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.246889   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.308657   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.724225   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:12.746449   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.809246   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.224804   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.247079   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.325658   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.723793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:13.745266   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.807779   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.222598   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.244733   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.308124   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.728165   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:14.746139   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.808642   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:15.223457   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.246721   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.308556   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:15.933232   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:15.936608   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.936821   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:16.223056   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.245394   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.307894   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:16.722613   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:16.745393   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.808036   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:17.224002   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.245283   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.327819   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:17.725793   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:17.744806   21003 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.808170   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:18.227738   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.245282   21003 kapi.go:107] duration metric: took 1m13.503976561s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:08:18.329111   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:18.787939   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:18.807754   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:19.222198   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.308444   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:19.723855   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:19.808045   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:20.222926   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.307854   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:20.723764   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:20.826135   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:21.222994   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.307673   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:21.722977   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:21.807653   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:22.432663   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.432991   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:22.723932   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:22.825185   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:23.226536   21003 kapi.go:107] duration metric: took 1m16.007133625s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:08:23.228553   21003 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-647117 cluster.
	I0829 18:08:23.229841   21003 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:08:23.231235   21003 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:08:23.309308   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:23.809205   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:24.309098   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:24.808683   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:25.307456   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:25.810519   21003 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:26.308581   21003 kapi.go:107] duration metric: took 1m20.505001944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:26.310411   21003 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, default-storageclass, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0829 18:08:26.311643   21003 addons.go:510] duration metric: took 1m29.89618082s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns default-storageclass storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0829 18:08:26.311695   21003 start.go:246] waiting for cluster config update ...
	I0829 18:08:26.311717   21003 start.go:255] writing updated cluster config ...
	I0829 18:08:26.311981   21003 ssh_runner.go:195] Run: rm -f paused
	I0829 18:08:26.363273   21003 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:08:26.365265   21003 out.go:177] * Done! kubectl is now configured to use "addons-647117" cluster and "default" namespace by default
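Note: the gcp-auth messages above say a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. As a minimal sketch only (the label value "true", the pod name, and the sleep command are illustrative assumptions, not taken from this run), such a pod could be created against this cluster with:

	# hypothetical: create a pod the gcp-auth webhook should leave unmodified
	kubectl --context addons-647117 run skip-gcp-auth-demo \
	  --image=gcr.io/k8s-minikube/busybox \
	  --labels=gcp-auth-skip-secret=true \
	  --restart=Never -- sleep 300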
	
	
	==> CRI-O <==
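Note: the debug entries below record the CRI RPCs (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued against CRI-O while this report was collected. Roughly the same data can be queried by hand with crictl on the node; a sketch, assuming the addons-647117 VM is still running and that crictl needs root there:

	# hypothetical commands mirroring the RPCs logged below
	minikube ssh -p addons-647117 -- sudo crictl version       # RuntimeService/Version
	minikube ssh -p addons-647117 -- sudo crictl imagefsinfo   # ImageService/ImageFsInfo
	minikube ssh -p addons-647117 -- sudo crictl ps -a         # RuntimeService/ListContainers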
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.099186151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955691099160635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a0ede56-a8a5-4e78-ada3-5095d828faa0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.104382875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6570ca5f-8fdb-41b4-992d-8054f80eb300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.104447623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6570ca5f-8fdb-41b4-992d-8054f80eb300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.104750213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONT
AINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7
bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f14c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e95250c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee031186475db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:
0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6570ca5f-8fdb-41b4-992d-8054f80eb300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.140181811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eac1aa1-e035-4253-8fcc-24594d8dfce8 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.140261810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eac1aa1-e035-4253-8fcc-24594d8dfce8 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.141845712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d350e06f-b303-431d-9d36-e444e2393b80 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.143080945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955691143054125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d350e06f-b303-431d-9d36-e444e2393b80 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.143748490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d508b0e-1794-44c8-86c2-a2a84615074c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.143810299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d508b0e-1794-44c8-86c2-a2a84615074c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.144080785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONT
AINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7
bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f14c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e95250c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee031186475db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:
0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d508b0e-1794-44c8-86c2-a2a84615074c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.176218274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8dc119b3-1959-4ea8-917d-61e41a8e3375 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.176290656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8dc119b3-1959-4ea8-917d-61e41a8e3375 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.177261695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18ddf863-b403-4962-8c4c-57c237fedd7f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.179593691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955691179567435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18ddf863-b403-4962-8c4c-57c237fedd7f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.180093092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=765bc409-8a04-407b-ae6f-ee0afe6a8a68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.180144670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=765bc409-8a04-407b-ae6f-ee0afe6a8a68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.180459616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONT
AINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7
bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f14c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e95250c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee031186475db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:
0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=765bc409-8a04-407b-ae6f-ee0afe6a8a68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.215173565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a68acb93-9338-41b7-a48d-9d3a6a21bb52 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.215263472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a68acb93-9338-41b7-a48d-9d3a6a21bb52 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.216492713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=281a7f35-b3dd-471c-bda2-faeeeff26079 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.217905886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955691217879501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=281a7f35-b3dd-471c-bda2-faeeeff26079 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.218456526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f69b29e-e39c-4be4-b33b-dbee58ce50e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.218512949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f69b29e-e39c-4be4-b33b-dbee58ce50e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:21:31 addons-647117 crio[663]: time="2024-08-29 18:21:31.218774611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6bc6a10e76549f7d587555f7a54d16837d032d8ecf00ee7b48f618079af9c28,PodSandboxId:eed98502443b793d9be197b100a8dcb16a0e902d37479157d0015b4ac1ff4d64,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724955606109086410,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-q67c7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17c41e7b-a4ec-4663-bdf0-b1b2832a432d,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc896fe7c39afeb26e09ebb76903fe8afb90db62961017e3270183ffdcb6722,PodSandboxId:6576b025b47bc783db4d350110b96229032b311f33b80ad55a34aac8f689c1f6,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724955466642415420,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5146adcd-04b5-44c5-bbda-6d831cc2420c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4f5014c540fcbb54a865fe8a20b52c6d717391d2293533251392ffe4eb0c489,PodSandboxId:9876705b70ba7858bc4fe710ca59be141bb47029ff0aff7d766c732718d0457a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1724955455292267013,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-jmjhc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b,PodSandboxId:56c18ca1bdb712fc2d36e56135d155949699995097b1d9b0845ba2e05c267bc2,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724954902493807072,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-89d5ffd79-j924p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 136c3da7-f196-4a84-9c08-9186bd2f8698,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b634523ff8d16508d32c6c9e41494a534b970bb9c92b6c6137a7c0bf1b7a95e,PodSandboxId:55d4a995519c03224e78b3aae7fb2a4e283b8e4f033c404157b6ad5ccaeae0c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONT
AINER_RUNNING,CreatedAt:1724954854895903728,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-9pvr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d5398d7-70c3-47b5-8cb8-da262a7c5736,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747,PodSandboxId:d2641f267147c021f101934a92f06ea44da3305073b26cf0734e3e5ee6c070d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724954822396476667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb10014-4a67-4ddf-ba6b-89598283be68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c,PodSandboxId:29673979fe79f5b16a977fc33807a5f1f34e8ff57d6f4fcdcf501ea98ff5d77d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7
bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724954819708824056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nhhtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1ec43c-bd73-44c0-9c15-e5bf4fbb32b2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238,PodSandboxId:ca373cf48871db0ffa7cf5f14c758639c6b5004bf3e7803ff9b67d34d91ebc78,Metadata:&ContainerMetadata{Name
:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724954817536689832,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dptz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a386c43-bd19-4ba5-a2be-6c0019adeedd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d,PodSandboxId:f1139b5439166add471956c552e948059493ccc23613d6b5ff9a9e95250c645d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0
,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724954805726750642,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69b030b129e3e87a334b1ddda886bebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef,PodSandboxId:63b0cbde37a9d3c11e177c2407a492016edf721ee031186475db9a6d433d914b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:
0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724954805708156025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f8dd63dd4052ba5509ca7e97f4edf66,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b,PodSandboxId:2b4c41aeae94044ae15f32459124342629d709dbfaecad1ac3dc578913c860d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageS
pec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724954805688104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17188c36fd60880dcc736dbc36b3343b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7,PodSandboxId:905af1fd51ac949e666ce8db448b384f88bb0d208bd01e6b2bcd34dcfbb88535,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d2
6915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724954805668036808,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-647117,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348c904ca755cb9b42a3678406195788,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f69b29e-e39c-4be4-b33b-dbee58ce50e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b6bc6a10e7654       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   eed98502443b7       hello-world-app-55bf9c44b4-q67c7
	dcc896fe7c39a       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         3 minutes ago        Running             nginx                     0                   6576b025b47bc       nginx
	c4f5014c540fc       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   3 minutes ago        Running             headlamp                  0                   9876705b70ba7       headlamp-57fb76fcdb-jmjhc
	a814d0a183682       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago       Running             gcp-auth                  0                   56c18ca1bdb71       gcp-auth-89d5ffd79-j924p
	0b634523ff8d1       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   13 minutes ago       Running             metrics-server            0                   55d4a995519c0       metrics-server-8988944d9-9pvr6
	c7d6293cd5ae5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   d2641f267147c       storage-provisioner
	43c5285b49b2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        14 minutes ago       Running             coredns                   0                   29673979fe79f       coredns-6f6b679f8f-nhhtz
	20d8d4b2a5b99       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        14 minutes ago       Running             kube-proxy                0                   ca373cf48871d       kube-proxy-dptz4
	7109054cd9285       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        14 minutes ago       Running             kube-controller-manager   0                   f1139b5439166       kube-controller-manager-addons-647117
	3bbe72bf43966       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        14 minutes ago       Running             kube-scheduler            0                   63b0cbde37a9d       kube-scheduler-addons-647117
	e4037213915cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        14 minutes ago       Running             kube-apiserver            0                   2b4c41aeae940       kube-apiserver-addons-647117
	ad53629527269       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        14 minutes ago       Running             etcd                      0                   905af1fd51ac9       etcd-addons-647117
	
	
	==> coredns [43c5285b49b2bddf73bb0aff1fa76e99b58ab907f9986e693cbbb01ce9b50b3c] <==
	[INFO] 127.0.0.1:40023 - 21501 "HINFO IN 2107751163851146271.7937220302157701423. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011076414s
	[INFO] 10.244.0.7:57388 - 3898 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00041242s
	[INFO] 10.244.0.7:57388 - 35385 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000160164s
	[INFO] 10.244.0.7:42181 - 16646 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102891s
	[INFO] 10.244.0.7:42181 - 61211 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000143215s
	[INFO] 10.244.0.7:40451 - 5822 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096496s
	[INFO] 10.244.0.7:40451 - 10428 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151048s
	[INFO] 10.244.0.7:50345 - 34777 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108547s
	[INFO] 10.244.0.7:50345 - 62175 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123168s
	[INFO] 10.244.0.7:43363 - 59112 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011089s
	[INFO] 10.244.0.7:43363 - 38637 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084266s
	[INFO] 10.244.0.7:43570 - 27914 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066159s
	[INFO] 10.244.0.7:43570 - 8968 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006745s
	[INFO] 10.244.0.7:51342 - 48058 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034576s
	[INFO] 10.244.0.7:51342 - 50108 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080216s
	[INFO] 10.244.0.7:55526 - 58103 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080655s
	[INFO] 10.244.0.7:55526 - 43765 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000491s
	[INFO] 10.244.0.22:59665 - 61483 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046118s
	[INFO] 10.244.0.22:56522 - 61414 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001110678s
	[INFO] 10.244.0.22:56188 - 1457 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155671s
	[INFO] 10.244.0.22:42917 - 2402 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000399062s
	[INFO] 10.244.0.22:48780 - 50292 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158469s
	[INFO] 10.244.0.22:43403 - 21131 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069692s
	[INFO] 10.244.0.22:59530 - 50990 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001145169s
	[INFO] 10.244.0.22:57789 - 7865 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001496446s
	
	
	==> describe nodes <==
	Name:               addons-647117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-647117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-647117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-647117
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-647117
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:21:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:20:28 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:20:28 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:20:28 +0000   Thu, 29 Aug 2024 18:06:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:20:28 +0000   Thu, 29 Aug 2024 18:06:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    addons-647117
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb2784d9f1e146b3adcb56f05f7d626c
	  System UUID:                eb2784d9-f1e1-46b3-adcb-56f05f7d626c
	  Boot ID:                    e13d5250-07a7-415d-bb34-b77c87eefe5b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-q67c7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  gcp-auth                    gcp-auth-89d5ffd79-j924p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  headlamp                    headlamp-57fb76fcdb-jmjhc                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 coredns-6f6b679f8f-nhhtz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-647117                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-647117             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-647117    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-dptz4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-647117             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-8988944d9-9pvr6           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         14m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-647117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-647117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-647117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node addons-647117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node addons-647117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node addons-647117 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node addons-647117 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node addons-647117 event: Registered Node addons-647117 in Controller
	
	
	==> dmesg <==
	[ +14.496686] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.231458] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:08] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.119850] kauditd_printk_skb: 65 callbacks suppressed
	[  +9.791316] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.274613] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.166700] kauditd_printk_skb: 51 callbacks suppressed
	[Aug29 18:09] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:11] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:13] kauditd_printk_skb: 28 callbacks suppressed
	[Aug29 18:16] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.960026] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.856149] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.076798] kauditd_printk_skb: 17 callbacks suppressed
	[Aug29 18:17] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.882088] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.437607] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.553101] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.346334] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.833680] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.005059] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.337864] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.949388] kauditd_printk_skb: 11 callbacks suppressed
	[Aug29 18:20] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.264042] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [ad536295272691478a3708a2e23c1a026a6603fcf3000e914d5e6e196db8e5c7] <==
	{"level":"warn","ts":"2024-08-29T18:08:15.916414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.890727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T18:08:15.916450Z","caller":"traceutil/trace.go:171","msg":"trace[383738334] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1140; }","duration":"365.944011ms","start":"2024-08-29T18:08:15.550499Z","end":"2024-08-29T18:08:15.916443Z","steps":["trace[383738334] 'agreement among raft nodes before linearized reading'  (duration: 365.865295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.916483Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:08:15.550459Z","time spent":"366.016618ms","remote":"127.0.0.1:37584","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":30,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"info","ts":"2024-08-29T18:08:15.915571Z","caller":"traceutil/trace.go:171","msg":"trace[1194422704] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"365.049318ms","start":"2024-08-29T18:08:15.550504Z","end":"2024-08-29T18:08:15.915554Z","steps":["trace[1194422704] 'read index received'  (duration: 364.868874ms)","trace[1194422704] 'applied index is now lower than readState.Index'  (duration: 180.004µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:08:15.916898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.484708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.916956Z","caller":"traceutil/trace.go:171","msg":"trace[83720747] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"207.515173ms","start":"2024-08-29T18:08:15.709401Z","end":"2024-08-29T18:08:15.916916Z","steps":["trace[83720747] 'agreement among raft nodes before linearized reading'  (duration: 207.462966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.917503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.990133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.917550Z","caller":"traceutil/trace.go:171","msg":"trace[1271701390] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"186.041171ms","start":"2024-08-29T18:08:15.731500Z","end":"2024-08-29T18:08:15.917541Z","steps":["trace[1271701390] 'agreement among raft nodes before linearized reading'  (duration: 185.939215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:15.917854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.129824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:15.917882Z","caller":"traceutil/trace.go:171","msg":"trace[471033133] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"124.157063ms","start":"2024-08-29T18:08:15.793714Z","end":"2024-08-29T18:08:15.917871Z","steps":["trace[471033133] 'agreement among raft nodes before linearized reading'  (duration: 124.114367ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:08:22.406730Z","caller":"traceutil/trace.go:171","msg":"trace[351282553] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1198; }","duration":"197.570563ms","start":"2024-08-29T18:08:22.209145Z","end":"2024-08-29T18:08:22.406715Z","steps":["trace[351282553] 'read index received'  (duration: 197.399929ms)","trace[351282553] 'applied index is now lower than readState.Index'  (duration: 170.126µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:08:22.407082Z","caller":"traceutil/trace.go:171","msg":"trace[1670518420] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"347.190393ms","start":"2024-08-29T18:08:22.059878Z","end":"2024-08-29T18:08:22.407068Z","steps":["trace[1670518420] 'process raft request'  (duration: 346.707402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.407202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:08:22.059865Z","time spent":"347.274505ms","remote":"127.0.0.1:37314","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":798,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" mod_revision:1131 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" value_size:704 lease:1009247904961359277 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-8988944d9-9pvr6.17f0454d6b25d4e0\" > >"}
	{"level":"warn","ts":"2024-08-29T18:08:22.414665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.166922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:22.414738Z","caller":"traceutil/trace.go:171","msg":"trace[241417199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"121.257071ms","start":"2024-08-29T18:08:22.293470Z","end":"2024-08-29T18:08:22.414727Z","steps":["trace[241417199] 'agreement among raft nodes before linearized reading'  (duration: 113.986108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.414703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.662655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-29T18:08:22.414842Z","caller":"traceutil/trace.go:171","msg":"trace[50687533] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1166; }","duration":"193.845523ms","start":"2024-08-29T18:08:22.220985Z","end":"2024-08-29T18:08:22.414831Z","steps":["trace[50687533] 'agreement among raft nodes before linearized reading'  (duration: 186.452075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:22.414967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.831006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:08:22.415002Z","caller":"traceutil/trace.go:171","msg":"trace[16339418] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"205.868124ms","start":"2024-08-29T18:08:22.209128Z","end":"2024-08-29T18:08:22.414996Z","steps":["trace[16339418] 'agreement among raft nodes before linearized reading'  (duration: 198.323343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:08:57.579149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.73227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-647117\" ","response":"range_response_count:1 size:10787"}
	{"level":"info","ts":"2024-08-29T18:08:57.579235Z","caller":"traceutil/trace.go:171","msg":"trace[263122715] range","detail":"{range_begin:/registry/minions/addons-647117; range_end:; response_count:1; response_revision:1297; }","duration":"103.837782ms","start":"2024-08-29T18:08:57.475383Z","end":"2024-08-29T18:08:57.579221Z","steps":["trace[263122715] 'range keys from in-memory index tree'  (duration: 103.559511ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:47.751238Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1559}
	{"level":"info","ts":"2024-08-29T18:16:47.785191Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1559,"took":"33.367177ms","hash":750415669,"current-db-size-bytes":6561792,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3682304,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2024-08-29T18:16:47.785252Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":750415669,"revision":1559,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T18:17:34.532473Z","caller":"traceutil/trace.go:171","msg":"trace[1845162260] transaction","detail":"{read_only:false; response_revision:2387; number_of_response:1; }","duration":"292.595899ms","start":"2024-08-29T18:17:34.239840Z","end":"2024-08-29T18:17:34.532436Z","steps":["trace[1845162260] 'process raft request'  (duration: 292.224026ms)"],"step_count":1}
	
	
	==> gcp-auth [a814d0a183682984e3cb42ff088fc1ad052a1d6544e3a750e7a1e4313a22d94b] <==
	2024/08/29 18:08:26 Ready to write response ...
	2024/08/29 18:16:36 Ready to marshal response ...
	2024/08/29 18:16:36 Ready to write response ...
	2024/08/29 18:16:40 Ready to marshal response ...
	2024/08/29 18:16:40 Ready to write response ...
	2024/08/29 18:16:41 Ready to marshal response ...
	2024/08/29 18:16:41 Ready to write response ...
	2024/08/29 18:16:41 Ready to marshal response ...
	2024/08/29 18:16:41 Ready to write response ...
	2024/08/29 18:16:55 Ready to marshal response ...
	2024/08/29 18:16:55 Ready to write response ...
	2024/08/29 18:17:00 Ready to marshal response ...
	2024/08/29 18:17:00 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:30 Ready to marshal response ...
	2024/08/29 18:17:30 Ready to write response ...
	2024/08/29 18:17:42 Ready to marshal response ...
	2024/08/29 18:17:42 Ready to write response ...
	2024/08/29 18:17:48 Ready to marshal response ...
	2024/08/29 18:17:48 Ready to write response ...
	2024/08/29 18:20:03 Ready to marshal response ...
	2024/08/29 18:20:03 Ready to write response ...
	
	
	==> kernel <==
	 18:21:31 up 15 min,  0 users,  load average: 0.10, 0.32, 0.39
	Linux addons-647117 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e4037213915cc653392983dc7fa728b8d9773a79ae1face6714b68dfa15ba02b] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 18:08:42.103726       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.189.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.189.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.189.204:443: connect: connection refused" logger="UnhandledError"
	I0829 18:08:42.141565       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0829 18:16:49.533111       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0829 18:17:11.753860       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 18:17:16.195581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.195614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.228724       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.228885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.234104       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.234155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.247150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.248440       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:17:16.358488       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:17:16.358534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:17:17.234989       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:17:17.361145       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 18:17:17.374275       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 18:17:30.375386       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.157.54"}
	I0829 18:17:42.080953       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:17:42.285810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.192.244"}
	I0829 18:17:46.012916       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:17:47.112654       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:20:03.440940       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.118.57"}
	
	
	==> kube-controller-manager [7109054cd9285a933bfa11037fec7f25a869e01a38c43d8982fe1f0f3387077d] <==
	I0829 18:20:03.298492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.874µs"
	I0829 18:20:05.629574       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0829 18:20:05.640585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.753µs"
	I0829 18:20:05.650261       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0829 18:20:07.020251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.679583ms"
	I0829 18:20:07.020361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="68.93µs"
	I0829 18:20:15.770484       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0829 18:20:21.929674       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:21.929741       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:25.537078       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:25.537126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:20:28.692653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-647117"
	W0829 18:20:33.699098       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:33.699151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:37.924553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:37.924675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:20:59.747244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:20:59.747519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:21:11.588784       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:21:11.588844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:21:15.172674       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:21:15.172720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:21:28.030492       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:21:28.030577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:21:30.196618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="14.19µs"
	
	
	==> kube-proxy [20d8d4b2a5b996dd7b56657d3435a14a37b099b691cedbb68378eebf8715b238] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:06:58.152664       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:06:58.167873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.43"]
	E0829 18:06:58.167951       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:06:58.245676       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:06:58.245739       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:06:58.245767       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:06:58.256186       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:06:58.256510       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:06:58.256522       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:06:58.261152       1 config.go:197] "Starting service config controller"
	I0829 18:06:58.261223       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:06:58.261753       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:06:58.261762       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:06:58.262346       1 config.go:326] "Starting node config controller"
	I0829 18:06:58.262355       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:06:58.362407       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:06:58.362425       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:06:58.362435       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3bbe72bf439667ac4646b14ce1641073e398ecd5a9b6b1fc9d41ef57605790ef] <==
	W0829 18:06:48.898836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:48.898932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.798293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:49.798410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.798508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.798538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.801096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:49.801188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:49.811894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:49.811940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.065849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:06:50.065949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.089891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:50.089949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.116438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:06:50.116507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.133045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:50.133135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.145488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:50.145535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.150457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:50.150555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:50.390065       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:50.390353       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:52.182506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:20:51 addons-647117 kubelet[1203]: E0829 18:20:51.446182    1203 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 18:20:51 addons-647117 kubelet[1203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:20:51 addons-647117 kubelet[1203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:20:51 addons-647117 kubelet[1203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:20:51 addons-647117 kubelet[1203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:20:51 addons-647117 kubelet[1203]: E0829 18:20:51.868802    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955651868368284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:51 addons-647117 kubelet[1203]: E0829 18:20:51.868842    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955651868368284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:20:52 addons-647117 kubelet[1203]: I0829 18:20:52.882022    1203 scope.go:117] "RemoveContainer" containerID="4f617161977681299f053c902914987ca27a5748a56e02e0350d8ba6218ed00e"
	Aug 29 18:20:52 addons-647117 kubelet[1203]: I0829 18:20:52.906678    1203 scope.go:117] "RemoveContainer" containerID="62f40717dc5b978d3752fff6733f2948ce86bbab32c8d036e2bd5a34fa2553c0"
	Aug 29 18:21:01 addons-647117 kubelet[1203]: E0829 18:21:01.872126    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955661871741702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:01 addons-647117 kubelet[1203]: E0829 18:21:01.872172    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955661871741702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:02 addons-647117 kubelet[1203]: E0829 18:21:02.434185    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df7618a9-c213-4e89-9b35-5a5530993d5a"
	Aug 29 18:21:11 addons-647117 kubelet[1203]: E0829 18:21:11.874742    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955671874369296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:11 addons-647117 kubelet[1203]: E0829 18:21:11.874778    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955671874369296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:15 addons-647117 kubelet[1203]: E0829 18:21:15.434125    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df7618a9-c213-4e89-9b35-5a5530993d5a"
	Aug 29 18:21:21 addons-647117 kubelet[1203]: E0829 18:21:21.877835    1203 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955681877274174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:21 addons-647117 kubelet[1203]: E0829 18:21:21.877879    1203 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724955681877274174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575908,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:21:30 addons-647117 kubelet[1203]: I0829 18:21:30.219844    1203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-q67c7" podStartSLOduration=84.94427285 podStartE2EDuration="1m27.21982744s" podCreationTimestamp="2024-08-29 18:20:03 +0000 UTC" firstStartedPulling="2024-08-29 18:20:03.822966206 +0000 UTC m=+792.494354611" lastFinishedPulling="2024-08-29 18:20:06.098520797 +0000 UTC m=+794.769909201" observedRunningTime="2024-08-29 18:20:07.010224261 +0000 UTC m=+795.681612686" watchObservedRunningTime="2024-08-29 18:21:30.21982744 +0000 UTC m=+878.891215864"
	Aug 29 18:21:30 addons-647117 kubelet[1203]: E0829 18:21:30.434183    1203 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df7618a9-c213-4e89-9b35-5a5530993d5a"
	Aug 29 18:21:31 addons-647117 kubelet[1203]: I0829 18:21:31.611466    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggprz\" (UniqueName: \"kubernetes.io/projected/3d5398d7-70c3-47b5-8cb8-da262a7c5736-kube-api-access-ggprz\") pod \"3d5398d7-70c3-47b5-8cb8-da262a7c5736\" (UID: \"3d5398d7-70c3-47b5-8cb8-da262a7c5736\") "
	Aug 29 18:21:31 addons-647117 kubelet[1203]: I0829 18:21:31.611565    1203 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3d5398d7-70c3-47b5-8cb8-da262a7c5736-tmp-dir\") pod \"3d5398d7-70c3-47b5-8cb8-da262a7c5736\" (UID: \"3d5398d7-70c3-47b5-8cb8-da262a7c5736\") "
	Aug 29 18:21:31 addons-647117 kubelet[1203]: I0829 18:21:31.613639    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d5398d7-70c3-47b5-8cb8-da262a7c5736-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3d5398d7-70c3-47b5-8cb8-da262a7c5736" (UID: "3d5398d7-70c3-47b5-8cb8-da262a7c5736"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 29 18:21:31 addons-647117 kubelet[1203]: I0829 18:21:31.621560    1203 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5398d7-70c3-47b5-8cb8-da262a7c5736-kube-api-access-ggprz" (OuterVolumeSpecName: "kube-api-access-ggprz") pod "3d5398d7-70c3-47b5-8cb8-da262a7c5736" (UID: "3d5398d7-70c3-47b5-8cb8-da262a7c5736"). InnerVolumeSpecName "kube-api-access-ggprz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:21:31 addons-647117 kubelet[1203]: I0829 18:21:31.712595    1203 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ggprz\" (UniqueName: \"kubernetes.io/projected/3d5398d7-70c3-47b5-8cb8-da262a7c5736-kube-api-access-ggprz\") on node \"addons-647117\" DevicePath \"\""
	Aug 29 18:21:31 addons-647117 kubelet[1203]: I0829 18:21:31.712629    1203 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3d5398d7-70c3-47b5-8cb8-da262a7c5736-tmp-dir\") on node \"addons-647117\" DevicePath \"\""
	
	
	==> storage-provisioner [c7d6293cd5ae54e43cb0c1bd4b5c8d422faf9f4530acd6e6470335c2ddd23747] <==
	I0829 18:07:03.102621       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:07:03.125054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:07:03.125120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:07:03.142183       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:07:03.142357       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b!
	I0829 18:07:03.143256       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a8c384d-e72d-41a0-bfd7-8f50bdcd533c", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b became leader
	I0829 18:07:03.243000       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-647117_782fd552-7659-45f7-a993-62776dcb3c7b!
	

                                                
                                                
-- /stdout --
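The kube-proxy log in the dump above shows nftables rule cleanup failing with "Operation not supported" before kube-proxy falls back to the iptables proxier in single-stack IPv4 mode. A rough way to confirm by hand which backend the node actually ended up with, assuming the addons-647117 guest is still up and reachable over minikube ssh (these commands are a sketch, not part of the test run):

    out/minikube-linux-amd64 -p addons-647117 ssh -- sudo nft list tables
    out/minikube-linux-amd64 -p addons-647117 ssh -- sudo iptables -t nat -L KUBE-SERVICES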
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-647117 -n addons-647117
helpers_test.go:261: (dbg) Run:  kubectl --context addons-647117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox metrics-server-8988944d9-9pvr6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-647117 describe pod busybox metrics-server-8988944d9-9pvr6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-647117 describe pod busybox metrics-server-8988944d9-9pvr6: exit status 1 (64.532437ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-647117/192.168.39.43
	Start Time:       Thu, 29 Aug 2024 18:08:26 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kj2nj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kj2nj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-647117
	  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m57s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8988944d9-9pvr6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-647117 describe pod busybox metrics-server-8988944d9-9pvr6: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (303.27s)
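Two separate problems are visible in the post-mortem above: the busybox pod never manages to pull gcr.io/k8s-minikube/busybox:1.28.4-glibc ("unable to retrieve auth token: invalid username/password"), and the metrics-server pod had already been deleted by the time the describe ran. A minimal manual follow-up, assuming the addons-647117 profile is still running and that the addon uses the usual k8s-app=metrics-server label (both assumptions, not taken from the test output):

    out/minikube-linux-amd64 -p addons-647117 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
    kubectl --context addons-647117 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-647117 -n kube-system get pods -l k8s-app=metrics-server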

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 node stop m02 -v=7 --alsologtostderr
E0829 18:30:30.609585   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:31:11.571914   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.467441576s)

                                                
                                                
-- stdout --
	* Stopping node "ha-782425-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:30:11.721134   35883 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:30:11.721390   35883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:30:11.721399   35883 out.go:358] Setting ErrFile to fd 2...
	I0829 18:30:11.721403   35883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:30:11.721628   35883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:30:11.721880   35883 mustload.go:65] Loading cluster: ha-782425
	I0829 18:30:11.722281   35883 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:30:11.722300   35883 stop.go:39] StopHost: ha-782425-m02
	I0829 18:30:11.722628   35883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:30:11.722669   35883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:30:11.739101   35883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42417
	I0829 18:30:11.739561   35883 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:30:11.740156   35883 main.go:141] libmachine: Using API Version  1
	I0829 18:30:11.740181   35883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:30:11.740534   35883 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:30:11.742751   35883 out.go:177] * Stopping node "ha-782425-m02"  ...
	I0829 18:30:11.744041   35883 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 18:30:11.744087   35883 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:30:11.744318   35883 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 18:30:11.744344   35883 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:30:11.747449   35883 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:30:11.747873   35883 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:30:11.747906   35883 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:30:11.748010   35883 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:30:11.748169   35883 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:30:11.748303   35883 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:30:11.748435   35883 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:30:11.832683   35883 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 18:30:11.886974   35883 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 18:30:11.942537   35883 main.go:141] libmachine: Stopping "ha-782425-m02"...
	I0829 18:30:11.942570   35883 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:30:11.944315   35883 main.go:141] libmachine: (ha-782425-m02) Calling .Stop
	I0829 18:30:11.947998   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 0/120
	I0829 18:30:12.949723   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 1/120
	I0829 18:30:13.951280   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 2/120
	I0829 18:30:14.952825   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 3/120
	I0829 18:30:15.954534   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 4/120
	I0829 18:30:16.956645   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 5/120
	I0829 18:30:17.958040   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 6/120
	I0829 18:30:18.960295   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 7/120
	I0829 18:30:19.961738   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 8/120
	I0829 18:30:20.963477   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 9/120
	I0829 18:30:21.965769   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 10/120
	I0829 18:30:22.967287   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 11/120
	I0829 18:30:23.968670   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 12/120
	I0829 18:30:24.970047   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 13/120
	I0829 18:30:25.971352   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 14/120
	I0829 18:30:26.973821   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 15/120
	I0829 18:30:27.975107   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 16/120
	I0829 18:30:28.976633   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 17/120
	I0829 18:30:29.978703   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 18/120
	I0829 18:30:30.980642   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 19/120
	I0829 18:30:31.982588   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 20/120
	I0829 18:30:32.984577   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 21/120
	I0829 18:30:33.986308   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 22/120
	I0829 18:30:34.987720   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 23/120
	I0829 18:30:35.989001   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 24/120
	I0829 18:30:36.990757   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 25/120
	I0829 18:30:37.992061   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 26/120
	I0829 18:30:38.993388   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 27/120
	I0829 18:30:39.995177   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 28/120
	I0829 18:30:40.996462   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 29/120
	I0829 18:30:41.998554   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 30/120
	I0829 18:30:42.999802   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 31/120
	I0829 18:30:44.000947   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 32/120
	I0829 18:30:45.002305   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 33/120
	I0829 18:30:46.004845   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 34/120
	I0829 18:30:47.006691   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 35/120
	I0829 18:30:48.008513   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 36/120
	I0829 18:30:49.009863   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 37/120
	I0829 18:30:50.011352   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 38/120
	I0829 18:30:51.012897   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 39/120
	I0829 18:30:52.014787   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 40/120
	I0829 18:30:53.016659   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 41/120
	I0829 18:30:54.018517   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 42/120
	I0829 18:30:55.019872   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 43/120
	I0829 18:30:56.021275   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 44/120
	I0829 18:30:57.022863   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 45/120
	I0829 18:30:58.024458   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 46/120
	I0829 18:30:59.025833   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 47/120
	I0829 18:31:00.027600   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 48/120
	I0829 18:31:01.029260   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 49/120
	I0829 18:31:02.031411   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 50/120
	I0829 18:31:03.032681   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 51/120
	I0829 18:31:04.034117   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 52/120
	I0829 18:31:05.036187   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 53/120
	I0829 18:31:06.037864   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 54/120
	I0829 18:31:07.039439   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 55/120
	I0829 18:31:08.041095   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 56/120
	I0829 18:31:09.042415   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 57/120
	I0829 18:31:10.044467   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 58/120
	I0829 18:31:11.047066   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 59/120
	I0829 18:31:12.048764   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 60/120
	I0829 18:31:13.050573   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 61/120
	I0829 18:31:14.052538   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 62/120
	I0829 18:31:15.053818   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 63/120
	I0829 18:31:16.055150   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 64/120
	I0829 18:31:17.057165   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 65/120
	I0829 18:31:18.058625   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 66/120
	I0829 18:31:19.060742   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 67/120
	I0829 18:31:20.062255   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 68/120
	I0829 18:31:21.063553   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 69/120
	I0829 18:31:22.065720   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 70/120
	I0829 18:31:23.067089   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 71/120
	I0829 18:31:24.068559   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 72/120
	I0829 18:31:25.070059   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 73/120
	I0829 18:31:26.071585   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 74/120
	I0829 18:31:27.073495   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 75/120
	I0829 18:31:28.074928   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 76/120
	I0829 18:31:29.076556   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 77/120
	I0829 18:31:30.078370   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 78/120
	I0829 18:31:31.080489   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 79/120
	I0829 18:31:32.082755   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 80/120
	I0829 18:31:33.084717   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 81/120
	I0829 18:31:34.086046   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 82/120
	I0829 18:31:35.087386   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 83/120
	I0829 18:31:36.088795   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 84/120
	I0829 18:31:37.090708   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 85/120
	I0829 18:31:38.092253   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 86/120
	I0829 18:31:39.093670   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 87/120
	I0829 18:31:40.095124   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 88/120
	I0829 18:31:41.096452   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 89/120
	I0829 18:31:42.098651   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 90/120
	I0829 18:31:43.099899   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 91/120
	I0829 18:31:44.101267   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 92/120
	I0829 18:31:45.102648   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 93/120
	I0829 18:31:46.104446   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 94/120
	I0829 18:31:47.106454   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 95/120
	I0829 18:31:48.107754   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 96/120
	I0829 18:31:49.109110   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 97/120
	I0829 18:31:50.110425   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 98/120
	I0829 18:31:51.112469   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 99/120
	I0829 18:31:52.114937   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 100/120
	I0829 18:31:53.116839   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 101/120
	I0829 18:31:54.118140   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 102/120
	I0829 18:31:55.119556   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 103/120
	I0829 18:31:56.120815   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 104/120
	I0829 18:31:57.122838   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 105/120
	I0829 18:31:58.124532   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 106/120
	I0829 18:31:59.126319   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 107/120
	I0829 18:32:00.127774   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 108/120
	I0829 18:32:01.129253   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 109/120
	I0829 18:32:02.131261   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 110/120
	I0829 18:32:03.132655   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 111/120
	I0829 18:32:04.134300   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 112/120
	I0829 18:32:05.136615   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 113/120
	I0829 18:32:06.138218   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 114/120
	I0829 18:32:07.139856   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 115/120
	I0829 18:32:08.141550   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 116/120
	I0829 18:32:09.143059   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 117/120
	I0829 18:32:10.144611   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 118/120
	I0829 18:32:11.146024   35883 main.go:141] libmachine: (ha-782425-m02) Waiting for machine to stop 119/120
	I0829 18:32:12.147414   35883 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 18:32:12.147568   35883 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-782425 node stop m02 -v=7 --alsologtostderr": exit status 30
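The stop timed out because the guest was still reported as "Running" after all 120 polls. Since this run uses the kvm2 driver, the domain state can be inspected and forced down by hand through libvirt; a rough sketch, assuming the driver's usual qemu:///system connection:

    virsh -c qemu:///system domstate ha-782425-m02
    virsh -c qemu:///system shutdown ha-782425-m02   # ask for an ACPI shutdown again
    virsh -c qemu:///system destroy ha-782425-m02    # force off if the guest keeps ignoring it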
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (19.141593316s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:32:12.190403   36308 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:32:12.190525   36308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:12.190537   36308 out.go:358] Setting ErrFile to fd 2...
	I0829 18:32:12.190543   36308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:12.190714   36308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:32:12.190869   36308 out.go:352] Setting JSON to false
	I0829 18:32:12.190889   36308 mustload.go:65] Loading cluster: ha-782425
	I0829 18:32:12.190938   36308 notify.go:220] Checking for updates...
	I0829 18:32:12.191238   36308 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:32:12.191251   36308 status.go:255] checking status of ha-782425 ...
	I0829 18:32:12.191627   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:12.191684   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:12.210114   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46471
	I0829 18:32:12.210610   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:12.211210   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:12.211233   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:12.211750   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:12.211993   36308 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:32:12.213701   36308 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:32:12.213718   36308 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:12.214009   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:12.214042   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:12.229020   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0829 18:32:12.229536   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:12.230135   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:12.230161   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:12.230492   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:12.230727   36308 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:32:12.233243   36308 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:12.233685   36308 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:12.233721   36308 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:12.233832   36308 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:12.234139   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:12.234189   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:12.248809   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0829 18:32:12.249244   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:12.249745   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:12.249764   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:12.250136   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:12.250333   36308 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:32:12.250603   36308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:12.250635   36308 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:32:12.253397   36308 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:12.253754   36308 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:12.253783   36308 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:12.253907   36308 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:32:12.254112   36308 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:32:12.254262   36308 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:32:12.254399   36308 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:32:12.339436   36308 ssh_runner.go:195] Run: systemctl --version
	I0829 18:32:12.346484   36308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:12.363899   36308 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:12.363932   36308 api_server.go:166] Checking apiserver status ...
	I0829 18:32:12.363987   36308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:12.379121   36308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:32:12.393902   36308 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:12.393971   36308 ssh_runner.go:195] Run: ls
	I0829 18:32:12.397986   36308 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:12.401904   36308 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:12.401927   36308 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:32:12.401939   36308 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:12.401961   36308 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:32:12.402260   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:12.402300   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:12.416903   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0829 18:32:12.417301   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:12.417767   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:12.417784   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:12.418122   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:12.418296   36308 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:32:12.419943   36308 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:32:12.419959   36308 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:12.420278   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:12.420311   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:12.434858   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0829 18:32:12.435248   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:12.435685   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:12.435711   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:12.436010   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:12.436164   36308 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:32:12.439451   36308 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:12.439909   36308 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:12.439935   36308 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:12.440033   36308 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:12.440370   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:12.440413   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:12.454885   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0829 18:32:12.455299   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:12.455740   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:12.455759   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:12.456128   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:12.456290   36308 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:32:12.456471   36308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:12.456495   36308 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:32:12.459723   36308 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:12.460131   36308 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:12.460155   36308 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:12.460291   36308 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:32:12.460466   36308 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:32:12.460726   36308 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:32:12.460857   36308 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	W0829 18:32:30.946288   36308 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:30.946392   36308 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0829 18:32:30.946406   36308 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:30.946412   36308 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:32:30.946429   36308 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:30.946436   36308 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:32:30.946735   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:30.946772   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:30.962200   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0829 18:32:30.962628   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:30.963047   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:30.963067   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:30.963395   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:30.963573   36308 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:32:30.965363   36308 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:32:30.965375   36308 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:30.965689   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:30.965726   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:30.980322   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0829 18:32:30.980680   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:30.981170   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:30.981206   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:30.981549   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:30.981733   36308 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:32:30.984383   36308 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:30.984764   36308 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:30.984783   36308 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:30.984929   36308 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:30.985250   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:30.985294   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:30.999498   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0829 18:32:30.999942   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:31.000463   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:31.000485   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:31.000764   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:31.000952   36308 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:32:31.001112   36308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:31.001127   36308 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:32:31.004189   36308 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:31.004571   36308 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:31.004594   36308 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:31.004737   36308 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:32:31.004899   36308 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:32:31.005107   36308 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:32:31.005249   36308 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:32:31.082228   36308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:31.098553   36308 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:31.098586   36308 api_server.go:166] Checking apiserver status ...
	I0829 18:32:31.098638   36308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:31.115634   36308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:32:31.124911   36308 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:31.124972   36308 ssh_runner.go:195] Run: ls
	I0829 18:32:31.128893   36308 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:31.133375   36308 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:31.133405   36308 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:32:31.133416   36308 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:31.133435   36308 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:32:31.133834   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:31.133872   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:31.149777   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0829 18:32:31.150281   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:31.150820   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:31.150849   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:31.151161   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:31.151348   36308 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:32:31.152778   36308 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:32:31.152794   36308 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:31.153104   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:31.153144   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:31.167522   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0829 18:32:31.167903   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:31.168338   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:31.168364   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:31.168659   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:31.168856   36308 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:32:31.171232   36308 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:31.171560   36308 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:31.171582   36308 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:31.171713   36308 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:31.171980   36308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:31.172015   36308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:31.186070   36308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0829 18:32:31.186556   36308 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:31.187063   36308 main.go:141] libmachine: Using API Version  1
	I0829 18:32:31.187090   36308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:31.187381   36308 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:31.187569   36308 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:32:31.187757   36308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:31.187777   36308 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:32:31.190706   36308 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:31.191152   36308 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:31.191180   36308 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:31.191257   36308 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:32:31.191425   36308 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:32:31.191565   36308 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:32:31.191689   36308 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:32:31.274366   36308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:31.289157   36308 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr" : exit status 3
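Annotation: the exit status 3 above follows directly from the `dial tcp 192.168.39.253:22: connect: no route to host` errors earlier in the stderr — ha-782425-m02 had just been stopped by the preceding `node stop m02` step (see the Audit table below), so `minikube status` reports its Host as Error and exits non-zero. A minimal sketch of reproducing the same checks by hand against this profile, assuming the binary, per-machine key path, and node IP captured in the log above:

    # Re-run the status probe exactly as the test did; exit code 3 signals a node in Error state
    out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr; echo "exit: $?"

    # Hypothetical manual check: probe SSH reachability of the stopped secondary directly,
    # reusing the key path and user shown in the sshutil.go lines above
    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa \
        docker@192.168.39.253 'df -h /var'

Against a running node the second command prints the /var usage line the harness parses; with m02 stopped it is expected to fail with the same no-route-to-host error.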
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-782425 -n ha-782425
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-782425 logs -n 25: (1.336891979s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425:/home/docker/cp-test_ha-782425-m03_ha-782425.txt                       |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425 sudo cat                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425.txt                                 |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m04 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp testdata/cp-test.txt                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425:/home/docker/cp-test_ha-782425-m04_ha-782425.txt                       |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425 sudo cat                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425.txt                                 |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03:/home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m03 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-782425 node stop m02 -v=7                                                     | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:25:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:25:37.867147   31894 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:25:37.867260   31894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:25:37.867269   31894 out.go:358] Setting ErrFile to fd 2...
	I0829 18:25:37.867277   31894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:25:37.867502   31894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:25:37.868071   31894 out.go:352] Setting JSON to false
	I0829 18:25:37.868905   31894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4085,"bootTime":1724951853,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:25:37.868962   31894 start.go:139] virtualization: kvm guest
	I0829 18:25:37.871126   31894 out.go:177] * [ha-782425] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:25:37.872509   31894 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:25:37.872500   31894 notify.go:220] Checking for updates...
	I0829 18:25:37.875147   31894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:25:37.876547   31894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:25:37.878107   31894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:25:37.879531   31894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:25:37.880985   31894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:25:37.882332   31894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:25:37.917194   31894 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:25:37.918627   31894 start.go:297] selected driver: kvm2
	I0829 18:25:37.918643   31894 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:25:37.918658   31894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:25:37.919635   31894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:25:37.919735   31894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:25:37.935215   31894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:25:37.935265   31894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:25:37.935474   31894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:25:37.935545   31894 cni.go:84] Creating CNI manager for ""
	I0829 18:25:37.935558   31894 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0829 18:25:37.935569   31894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 18:25:37.935622   31894 start.go:340] cluster config:
	{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:25:37.935718   31894 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:25:37.937548   31894 out.go:177] * Starting "ha-782425" primary control-plane node in "ha-782425" cluster
	I0829 18:25:37.939035   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:25:37.939074   31894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:25:37.939081   31894 cache.go:56] Caching tarball of preloaded images
	I0829 18:25:37.939168   31894 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:25:37.939182   31894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:25:37.939477   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:25:37.939502   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json: {Name:mkade95470e4316599e5e198e15c0eefeb7e120b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:25:37.939656   31894 start.go:360] acquireMachinesLock for ha-782425: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:25:37.939691   31894 start.go:364] duration metric: took 19.785µs to acquireMachinesLock for "ha-782425"
	I0829 18:25:37.939714   31894 start.go:93] Provisioning new machine with config: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:25:37.939768   31894 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:25:37.941384   31894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 18:25:37.941518   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:25:37.941565   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:25:37.956286   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I0829 18:25:37.956726   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:25:37.957245   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:25:37.957269   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:25:37.957718   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:25:37.957980   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:25:37.958223   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:25:37.958368   31894 start.go:159] libmachine.API.Create for "ha-782425" (driver="kvm2")
	I0829 18:25:37.958398   31894 client.go:168] LocalClient.Create starting
	I0829 18:25:37.958429   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:25:37.958463   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:25:37.958479   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:25:37.958536   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:25:37.958557   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:25:37.958571   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:25:37.958586   31894 main.go:141] libmachine: Running pre-create checks...
	I0829 18:25:37.958598   31894 main.go:141] libmachine: (ha-782425) Calling .PreCreateCheck
	I0829 18:25:37.958967   31894 main.go:141] libmachine: (ha-782425) Calling .GetConfigRaw
	I0829 18:25:37.959311   31894 main.go:141] libmachine: Creating machine...
	I0829 18:25:37.959322   31894 main.go:141] libmachine: (ha-782425) Calling .Create
	I0829 18:25:37.959446   31894 main.go:141] libmachine: (ha-782425) Creating KVM machine...
	I0829 18:25:37.960839   31894 main.go:141] libmachine: (ha-782425) DBG | found existing default KVM network
	I0829 18:25:37.961520   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:37.961409   31917 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0829 18:25:37.961584   31894 main.go:141] libmachine: (ha-782425) DBG | created network xml: 
	I0829 18:25:37.961607   31894 main.go:141] libmachine: (ha-782425) DBG | <network>
	I0829 18:25:37.961636   31894 main.go:141] libmachine: (ha-782425) DBG |   <name>mk-ha-782425</name>
	I0829 18:25:37.961660   31894 main.go:141] libmachine: (ha-782425) DBG |   <dns enable='no'/>
	I0829 18:25:37.961680   31894 main.go:141] libmachine: (ha-782425) DBG |   
	I0829 18:25:37.961702   31894 main.go:141] libmachine: (ha-782425) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:25:37.961711   31894 main.go:141] libmachine: (ha-782425) DBG |     <dhcp>
	I0829 18:25:37.961719   31894 main.go:141] libmachine: (ha-782425) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:25:37.961731   31894 main.go:141] libmachine: (ha-782425) DBG |     </dhcp>
	I0829 18:25:37.961741   31894 main.go:141] libmachine: (ha-782425) DBG |   </ip>
	I0829 18:25:37.961746   31894 main.go:141] libmachine: (ha-782425) DBG |   
	I0829 18:25:37.961753   31894 main.go:141] libmachine: (ha-782425) DBG | </network>
	I0829 18:25:37.961771   31894 main.go:141] libmachine: (ha-782425) DBG | 
	I0829 18:25:37.967100   31894 main.go:141] libmachine: (ha-782425) DBG | trying to create private KVM network mk-ha-782425 192.168.39.0/24...
	I0829 18:25:38.030569   31894 main.go:141] libmachine: (ha-782425) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425 ...
	I0829 18:25:38.030600   31894 main.go:141] libmachine: (ha-782425) DBG | private KVM network mk-ha-782425 192.168.39.0/24 created
	I0829 18:25:38.030613   31894 main.go:141] libmachine: (ha-782425) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:25:38.030663   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.030518   31917 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:25:38.030698   31894 main.go:141] libmachine: (ha-782425) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:25:38.292972   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.292825   31917 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa...
	I0829 18:25:38.429095   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.428945   31917 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/ha-782425.rawdisk...
	I0829 18:25:38.429117   31894 main.go:141] libmachine: (ha-782425) DBG | Writing magic tar header
	I0829 18:25:38.429154   31894 main.go:141] libmachine: (ha-782425) DBG | Writing SSH key tar header
	I0829 18:25:38.429201   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.429059   31917 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425 ...
	I0829 18:25:38.429233   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425
	I0829 18:25:38.429251   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:25:38.429261   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425 (perms=drwx------)
	I0829 18:25:38.429269   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:25:38.429276   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:25:38.429290   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:25:38.429301   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:25:38.429313   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:25:38.429324   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home
	I0829 18:25:38.429333   31894 main.go:141] libmachine: (ha-782425) DBG | Skipping /home - not owner
	I0829 18:25:38.429342   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:25:38.429360   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:25:38.429382   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:25:38.429394   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:25:38.429402   31894 main.go:141] libmachine: (ha-782425) Creating domain...
	I0829 18:25:38.430469   31894 main.go:141] libmachine: (ha-782425) define libvirt domain using xml: 
	I0829 18:25:38.430485   31894 main.go:141] libmachine: (ha-782425) <domain type='kvm'>
	I0829 18:25:38.430495   31894 main.go:141] libmachine: (ha-782425)   <name>ha-782425</name>
	I0829 18:25:38.430503   31894 main.go:141] libmachine: (ha-782425)   <memory unit='MiB'>2200</memory>
	I0829 18:25:38.430512   31894 main.go:141] libmachine: (ha-782425)   <vcpu>2</vcpu>
	I0829 18:25:38.430518   31894 main.go:141] libmachine: (ha-782425)   <features>
	I0829 18:25:38.430526   31894 main.go:141] libmachine: (ha-782425)     <acpi/>
	I0829 18:25:38.430534   31894 main.go:141] libmachine: (ha-782425)     <apic/>
	I0829 18:25:38.430543   31894 main.go:141] libmachine: (ha-782425)     <pae/>
	I0829 18:25:38.430563   31894 main.go:141] libmachine: (ha-782425)     
	I0829 18:25:38.430569   31894 main.go:141] libmachine: (ha-782425)   </features>
	I0829 18:25:38.430575   31894 main.go:141] libmachine: (ha-782425)   <cpu mode='host-passthrough'>
	I0829 18:25:38.430580   31894 main.go:141] libmachine: (ha-782425)   
	I0829 18:25:38.430584   31894 main.go:141] libmachine: (ha-782425)   </cpu>
	I0829 18:25:38.430589   31894 main.go:141] libmachine: (ha-782425)   <os>
	I0829 18:25:38.430593   31894 main.go:141] libmachine: (ha-782425)     <type>hvm</type>
	I0829 18:25:38.430607   31894 main.go:141] libmachine: (ha-782425)     <boot dev='cdrom'/>
	I0829 18:25:38.430611   31894 main.go:141] libmachine: (ha-782425)     <boot dev='hd'/>
	I0829 18:25:38.430618   31894 main.go:141] libmachine: (ha-782425)     <bootmenu enable='no'/>
	I0829 18:25:38.430629   31894 main.go:141] libmachine: (ha-782425)   </os>
	I0829 18:25:38.430636   31894 main.go:141] libmachine: (ha-782425)   <devices>
	I0829 18:25:38.430642   31894 main.go:141] libmachine: (ha-782425)     <disk type='file' device='cdrom'>
	I0829 18:25:38.430651   31894 main.go:141] libmachine: (ha-782425)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/boot2docker.iso'/>
	I0829 18:25:38.430662   31894 main.go:141] libmachine: (ha-782425)       <target dev='hdc' bus='scsi'/>
	I0829 18:25:38.430686   31894 main.go:141] libmachine: (ha-782425)       <readonly/>
	I0829 18:25:38.430705   31894 main.go:141] libmachine: (ha-782425)     </disk>
	I0829 18:25:38.430720   31894 main.go:141] libmachine: (ha-782425)     <disk type='file' device='disk'>
	I0829 18:25:38.430736   31894 main.go:141] libmachine: (ha-782425)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:25:38.430771   31894 main.go:141] libmachine: (ha-782425)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/ha-782425.rawdisk'/>
	I0829 18:25:38.430784   31894 main.go:141] libmachine: (ha-782425)       <target dev='hda' bus='virtio'/>
	I0829 18:25:38.430793   31894 main.go:141] libmachine: (ha-782425)     </disk>
	I0829 18:25:38.430804   31894 main.go:141] libmachine: (ha-782425)     <interface type='network'>
	I0829 18:25:38.430835   31894 main.go:141] libmachine: (ha-782425)       <source network='mk-ha-782425'/>
	I0829 18:25:38.430856   31894 main.go:141] libmachine: (ha-782425)       <model type='virtio'/>
	I0829 18:25:38.430870   31894 main.go:141] libmachine: (ha-782425)     </interface>
	I0829 18:25:38.430884   31894 main.go:141] libmachine: (ha-782425)     <interface type='network'>
	I0829 18:25:38.430903   31894 main.go:141] libmachine: (ha-782425)       <source network='default'/>
	I0829 18:25:38.430921   31894 main.go:141] libmachine: (ha-782425)       <model type='virtio'/>
	I0829 18:25:38.430934   31894 main.go:141] libmachine: (ha-782425)     </interface>
	I0829 18:25:38.430944   31894 main.go:141] libmachine: (ha-782425)     <serial type='pty'>
	I0829 18:25:38.430955   31894 main.go:141] libmachine: (ha-782425)       <target port='0'/>
	I0829 18:25:38.430965   31894 main.go:141] libmachine: (ha-782425)     </serial>
	I0829 18:25:38.430976   31894 main.go:141] libmachine: (ha-782425)     <console type='pty'>
	I0829 18:25:38.430985   31894 main.go:141] libmachine: (ha-782425)       <target type='serial' port='0'/>
	I0829 18:25:38.431010   31894 main.go:141] libmachine: (ha-782425)     </console>
	I0829 18:25:38.431027   31894 main.go:141] libmachine: (ha-782425)     <rng model='virtio'>
	I0829 18:25:38.431039   31894 main.go:141] libmachine: (ha-782425)       <backend model='random'>/dev/random</backend>
	I0829 18:25:38.431048   31894 main.go:141] libmachine: (ha-782425)     </rng>
	I0829 18:25:38.431058   31894 main.go:141] libmachine: (ha-782425)     
	I0829 18:25:38.431067   31894 main.go:141] libmachine: (ha-782425)     
	I0829 18:25:38.431077   31894 main.go:141] libmachine: (ha-782425)   </devices>
	I0829 18:25:38.431086   31894 main.go:141] libmachine: (ha-782425) </domain>
	I0829 18:25:38.431110   31894 main.go:141] libmachine: (ha-782425) 
	I0829 18:25:38.435249   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:47:48:9b in network default
	I0829 18:25:38.435805   31894 main.go:141] libmachine: (ha-782425) Ensuring networks are active...
	I0829 18:25:38.435822   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:38.436526   31894 main.go:141] libmachine: (ha-782425) Ensuring network default is active
	I0829 18:25:38.436895   31894 main.go:141] libmachine: (ha-782425) Ensuring network mk-ha-782425 is active
	I0829 18:25:38.437417   31894 main.go:141] libmachine: (ha-782425) Getting domain xml...
	I0829 18:25:38.438296   31894 main.go:141] libmachine: (ha-782425) Creating domain...
	I0829 18:25:39.612755   31894 main.go:141] libmachine: (ha-782425) Waiting to get IP...
	I0829 18:25:39.613519   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:39.613932   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:39.613969   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:39.613908   31917 retry.go:31] will retry after 252.54956ms: waiting for machine to come up
	I0829 18:25:39.868393   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:39.868798   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:39.868825   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:39.868768   31917 retry.go:31] will retry after 318.299028ms: waiting for machine to come up
	I0829 18:25:40.188369   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:40.188837   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:40.188860   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:40.188786   31917 retry.go:31] will retry after 363.788273ms: waiting for machine to come up
	I0829 18:25:40.554528   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:40.554973   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:40.555001   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:40.554931   31917 retry.go:31] will retry after 455.656451ms: waiting for machine to come up
	I0829 18:25:41.012838   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:41.013254   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:41.013285   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:41.013209   31917 retry.go:31] will retry after 583.854313ms: waiting for machine to come up
	I0829 18:25:41.600776   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:41.601286   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:41.601323   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:41.601203   31917 retry.go:31] will retry after 720.267915ms: waiting for machine to come up
	I0829 18:25:42.323178   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:42.323693   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:42.323734   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:42.323624   31917 retry.go:31] will retry after 989.211909ms: waiting for machine to come up
	I0829 18:25:43.314724   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:43.315093   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:43.315119   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:43.315058   31917 retry.go:31] will retry after 1.144448467s: waiting for machine to come up
	I0829 18:25:44.461273   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:44.461690   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:44.461709   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:44.461657   31917 retry.go:31] will retry after 1.158642835s: waiting for machine to come up
	I0829 18:25:45.621905   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:45.622358   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:45.622391   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:45.622320   31917 retry.go:31] will retry after 1.998708112s: waiting for machine to come up
	I0829 18:25:47.622185   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:47.622780   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:47.622811   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:47.622722   31917 retry.go:31] will retry after 2.004091072s: waiting for machine to come up
	I0829 18:25:49.628964   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:49.629575   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:49.629605   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:49.629546   31917 retry.go:31] will retry after 2.529906337s: waiting for machine to come up
	I0829 18:25:52.160611   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:52.160895   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:52.160912   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:52.160852   31917 retry.go:31] will retry after 3.940258303s: waiting for machine to come up
	I0829 18:25:56.104431   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:56.104936   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:56.104960   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:56.104888   31917 retry.go:31] will retry after 4.177118538s: waiting for machine to come up
	I0829 18:26:00.285123   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.285741   31894 main.go:141] libmachine: (ha-782425) Found IP for machine: 192.168.39.39
	I0829 18:26:00.285766   31894 main.go:141] libmachine: (ha-782425) Reserving static IP address...
	I0829 18:26:00.285780   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has current primary IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.286236   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find host DHCP lease matching {name: "ha-782425", mac: "52:54:00:4e:37:dc", ip: "192.168.39.39"} in network mk-ha-782425
	I0829 18:26:00.355403   31894 main.go:141] libmachine: (ha-782425) DBG | Getting to WaitForSSH function...
	I0829 18:26:00.355449   31894 main.go:141] libmachine: (ha-782425) Reserved static IP address: 192.168.39.39
	I0829 18:26:00.355463   31894 main.go:141] libmachine: (ha-782425) Waiting for SSH to be available...
	I0829 18:26:00.357630   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.358018   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.358048   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.358192   31894 main.go:141] libmachine: (ha-782425) DBG | Using SSH client type: external
	I0829 18:26:00.358218   31894 main.go:141] libmachine: (ha-782425) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa (-rw-------)
	I0829 18:26:00.358247   31894 main.go:141] libmachine: (ha-782425) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:26:00.358255   31894 main.go:141] libmachine: (ha-782425) DBG | About to run SSH command:
	I0829 18:26:00.358268   31894 main.go:141] libmachine: (ha-782425) DBG | exit 0
	I0829 18:26:00.482401   31894 main.go:141] libmachine: (ha-782425) DBG | SSH cmd err, output: <nil>: 
	I0829 18:26:00.482690   31894 main.go:141] libmachine: (ha-782425) KVM machine creation complete!
	I0829 18:26:00.482969   31894 main.go:141] libmachine: (ha-782425) Calling .GetConfigRaw
	I0829 18:26:00.483536   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:00.483778   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:00.483936   31894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:26:00.483954   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:00.485260   31894 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:26:00.485278   31894 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:26:00.485285   31894 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:26:00.485291   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.488046   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.488395   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.488429   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.488606   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.488780   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.488949   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.489085   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.489274   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.489560   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.489578   31894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:26:00.597339   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:26:00.597364   31894 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:26:00.597377   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.599767   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.600124   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.600160   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.600321   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.600521   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.600663   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.600777   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.600956   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.601126   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.601136   31894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:26:00.710649   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:26:00.710712   31894 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:26:00.710721   31894 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:26:00.710728   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:26:00.710947   31894 buildroot.go:166] provisioning hostname "ha-782425"
	I0829 18:26:00.710971   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:26:00.711148   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.713696   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.714073   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.714112   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.714296   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.714511   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.714635   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.714753   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.714909   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.715079   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.715092   31894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425 && echo "ha-782425" | sudo tee /etc/hostname
	I0829 18:26:00.836970   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425
	
	I0829 18:26:00.836997   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.839997   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.840367   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.840400   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.840531   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.840729   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.840872   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.841037   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.841202   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.841416   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.841439   31894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:26:00.958497   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
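The hostname step above is idempotent: the shell snippet only rewrites or appends a `127.0.1.1` entry when `/etc/hosts` does not already map the new hostname. A rough Go equivalent of that grep/sed logic, assuming you run it against a scratch copy of the file (paths and hostname are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // ensureHostsEntry mirrors the shell above: if no line already ends with the
    // hostname, rewrite the existing 127.0.1.1 line or append a new one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil // already present
    	}
    	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loop.Match(data) {
    		data = loop.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
    	} else {
    		data = append(data, []byte("\n127.0.1.1 "+hostname+"\n")...)
    	}
    	return os.WriteFile(path, data, 0644)
    }

    func main() {
    	if err := ensureHostsEntry("./hosts.copy", "ha-782425"); err != nil {
    		fmt.Println(err)
    	}
    }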
	I0829 18:26:00.958521   31894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:26:00.958538   31894 buildroot.go:174] setting up certificates
	I0829 18:26:00.958547   31894 provision.go:84] configureAuth start
	I0829 18:26:00.958555   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:26:00.958866   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:00.961597   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.961805   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.961838   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.961942   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.963894   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.964151   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.964173   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.964277   31894 provision.go:143] copyHostCerts
	I0829 18:26:00.964308   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:00.964351   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:26:00.964366   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:00.964470   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:26:00.964554   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:00.964575   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:26:00.964582   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:00.964616   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:26:00.964664   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:00.964680   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:26:00.964686   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:00.964708   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:26:00.964750   31894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425 san=[127.0.0.1 192.168.39.39 ha-782425 localhost minikube]
	I0829 18:26:01.079246   31894 provision.go:177] copyRemoteCerts
	I0829 18:26:01.079300   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:26:01.079331   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.081792   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.082106   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.082137   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.082301   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.082509   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.082691   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.082835   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.167913   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:26:01.167997   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:26:01.191043   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:26:01.191129   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0829 18:26:01.212920   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:26:01.212985   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:26:01.234244   31894 provision.go:87] duration metric: took 275.684593ms to configureAuth
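configureAuth above issues a server certificate signed by the local minikubeCA with the SANs listed in the provision.go line (127.0.0.1, the node IP, the hostname, localhost, minikube), then copies it to /etc/docker on the guest. A generic crypto/x509 sketch of issuing such a SAN certificate (a throwaway self-signed CA stands in for minikubeCA; this is not minikube's provision code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newSignedCert issues a server certificate signed by ca with the given
    // IP and DNS SANs.
    func newSignedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, names []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-782425"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     names,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	// Throwaway self-signed CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)

    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")}
    	der, _, err := newSignedCert(ca, caKey, ips, []string{"ha-782425", "localhost", "minikube"})
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
    }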
	I0829 18:26:01.234275   31894 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:26:01.234479   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:01.234567   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.237125   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.237462   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.237489   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.237630   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.237817   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.237969   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.238110   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.238249   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:01.238407   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:01.238428   31894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:26:01.455620   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:26:01.455656   31894 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:26:01.455669   31894 main.go:141] libmachine: (ha-782425) Calling .GetURL
	I0829 18:26:01.456811   31894 main.go:141] libmachine: (ha-782425) DBG | Using libvirt version 6000000
	I0829 18:26:01.458787   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.459127   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.459168   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.459267   31894 main.go:141] libmachine: Docker is up and running!
	I0829 18:26:01.459279   31894 main.go:141] libmachine: Reticulating splines...
	I0829 18:26:01.459287   31894 client.go:171] duration metric: took 23.500881314s to LocalClient.Create
	I0829 18:26:01.459310   31894 start.go:167] duration metric: took 23.500942151s to libmachine.API.Create "ha-782425"
	I0829 18:26:01.459322   31894 start.go:293] postStartSetup for "ha-782425" (driver="kvm2")
	I0829 18:26:01.459334   31894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:26:01.459367   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.459573   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:26:01.459592   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.461877   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.462212   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.462240   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.462383   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.462557   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.462739   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.462879   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.544073   31894 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:26:01.548167   31894 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:26:01.548200   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:26:01.548274   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:26:01.548369   31894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:26:01.548381   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:26:01.548478   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:26:01.557256   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:01.580305   31894 start.go:296] duration metric: took 120.971682ms for postStartSetup
	I0829 18:26:01.580348   31894 main.go:141] libmachine: (ha-782425) Calling .GetConfigRaw
	I0829 18:26:01.581010   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:01.583449   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.583718   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.583746   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.583986   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:01.584164   31894 start.go:128] duration metric: took 23.644387848s to createHost
	I0829 18:26:01.584186   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.586374   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.586698   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.586716   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.586871   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.587039   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.587184   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.587318   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.587436   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:01.587606   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:01.587633   31894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:26:01.694987   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724955961.672763996
	
	I0829 18:26:01.695008   31894 fix.go:216] guest clock: 1724955961.672763996
	I0829 18:26:01.695015   31894 fix.go:229] Guest: 2024-08-29 18:26:01.672763996 +0000 UTC Remote: 2024-08-29 18:26:01.584176103 +0000 UTC m=+23.752171628 (delta=88.587893ms)
	I0829 18:26:01.695034   31894 fix.go:200] guest clock delta is within tolerance: 88.587893ms
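The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~88 ms skew as within tolerance. A small Go sketch of that comparison, assuming the same SSH access as before (host and key are placeholders; sub-microsecond precision is lost to the float conversion, which is fine for a skew check):

    package main

    import (
    	"fmt"
    	"math"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta reads the guest clock over SSH and returns local minus guest time.
    func clockDelta(host, keyPath string) (time.Duration, error) {
    	out, err := exec.Command("ssh", "-i", keyPath, "docker@"+host, "date +%s.%N").Output()
    	if err != nil {
    		return 0, err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return time.Since(guest), nil
    }

    func main() {
    	d, err := clockDelta("192.168.39.39", "/path/to/id_rsa")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if math.Abs(d.Seconds()) < 1 {
    		fmt.Printf("guest clock delta %v is within tolerance\n", d)
    	} else {
    		fmt.Printf("guest clock delta %v is too large\n", d)
    	}
    }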
	I0829 18:26:01.695040   31894 start.go:83] releasing machines lock for "ha-782425", held for 23.755337443s
	I0829 18:26:01.695060   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.695287   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:01.697859   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.698352   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.698387   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.698459   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.698952   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.699131   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.699237   31894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:26:01.699273   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.699355   31894 ssh_runner.go:195] Run: cat /version.json
	I0829 18:26:01.699380   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.702040   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.702401   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.702441   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.702462   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.702696   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.702899   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.702950   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.702975   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.703075   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.703102   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.703245   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.703261   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.703470   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.703601   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.782897   31894 ssh_runner.go:195] Run: systemctl --version
	I0829 18:26:01.815514   31894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:26:01.970702   31894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:26:01.976178   31894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:26:01.976233   31894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:26:01.992238   31894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:26:01.992258   31894 start.go:495] detecting cgroup driver to use...
	I0829 18:26:01.992312   31894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:26:02.008342   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:26:02.021835   31894 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:26:02.021905   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:26:02.035185   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:26:02.048429   31894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:26:02.156392   31894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:26:02.294402   31894 docker.go:233] disabling docker service ...
	I0829 18:26:02.294462   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:26:02.308389   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:26:02.320832   31894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:26:02.459717   31894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:26:02.580176   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:26:02.595527   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:26:02.613403   31894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:26:02.613464   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.623157   31894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:26:02.623243   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.632952   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.642439   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.652287   31894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:26:02.662209   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.672069   31894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.688368   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.698250   31894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:26:02.707460   31894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:26:02.707504   31894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:26:02.720479   31894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:26:02.729874   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:02.852411   31894 ssh_runner.go:195] Run: sudo systemctl restart crio
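The runs above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place with sed (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) before restarting CRI-O. A standalone Go sketch of the same "replace a `key = value` line" edit, meant to be run against a copy of the drop-in rather than the live file (not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption replaces an existing `key = ...` line with `key = "value"`.
    func setCrioOption(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	if !re.Match(data) {
    		return fmt.Errorf("%s: no %q line to rewrite", path, key)
    	}
    	data = re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, data, 0644)
    }

    func main() {
    	conf := "./02-crio.conf" // work on a copy of /etc/crio/crio.conf.d/02-crio.conf
    	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
    		fmt.Println(err)
    	}
    	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Println(err)
    	}
    }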
	I0829 18:26:02.938754   31894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:26:02.938815   31894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:26:02.943380   31894 start.go:563] Will wait 60s for crictl version
	I0829 18:26:02.943425   31894 ssh_runner.go:195] Run: which crictl
	I0829 18:26:02.946880   31894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:26:02.984261   31894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:26:02.984338   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:03.010616   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:03.039162   31894 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:26:03.040233   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:03.043179   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:03.043479   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:03.043495   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:03.043704   31894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:26:03.047399   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:03.059113   31894 kubeadm.go:883] updating cluster {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0829 18:26:03.059203   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:26:03.059244   31894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:26:03.087934   31894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:26:03.087991   31894 ssh_runner.go:195] Run: which lz4
	I0829 18:26:03.091491   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0829 18:26:03.091573   31894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:26:03.095120   31894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:26:03.095146   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:26:04.293568   31894 crio.go:462] duration metric: took 1.202015488s to copy over tarball
	I0829 18:26:04.293653   31894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:26:06.284728   31894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.991042195s)
	I0829 18:26:06.284762   31894 crio.go:469] duration metric: took 1.991160188s to extract the tarball
	I0829 18:26:06.284772   31894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:26:06.320353   31894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:26:06.363216   31894 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:26:06.363244   31894 cache_images.go:84] Images are preloaded, skipping loading
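The preload step above scp's a ~389 MB lz4-compressed image tarball to the guest, extracts it into /var with xattrs preserved, and then re-runs `crictl images` to confirm everything is present. A minimal Go wrapper around the same tar invocation seen in the log (paths are illustrative; tar and lz4 must be installed on the target):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload stream-decompresses the preloaded image tarball into destDir
    // so the container runtime finds the images already pulled.
    func extractPreload(tarball, destDir string) (time.Duration, error) {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", destDir, "-xf", tarball)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return 0, fmt.Errorf("extract %s: %v: %s", tarball, err, out)
    	}
    	return time.Since(start), nil
    }

    func main() {
    	d, err := extractPreload("/preloaded.tar.lz4", "/var")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("extracted preload in %s\n", d)
    }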
	I0829 18:26:06.363255   31894 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.31.0 crio true true} ...
	I0829 18:26:06.363371   31894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
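The kubelet drop-in printed above is rendered from the node config (version, hostname override, node IP). A simplified text/template sketch of that rendering, with a trimmed-down unit body rather than the exact template minikube ships:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletDropIn is a simplified stand-in for the ExecStart drop-in above.
    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.31.0",
    		"NodeName":          "ha-782425",
    		"NodeIP":            "192.168.39.39",
    	})
    }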
	I0829 18:26:06.363438   31894 ssh_runner.go:195] Run: crio config
	I0829 18:26:06.406168   31894 cni.go:84] Creating CNI manager for ""
	I0829 18:26:06.406186   31894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 18:26:06.406198   31894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:26:06.406219   31894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-782425 NodeName:ha-782425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:26:06.406378   31894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-782425"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:26:06.406401   31894 kube-vip.go:115] generating kube-vip config ...
	I0829 18:26:06.406463   31894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:26:06.424445   31894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:26:06.424554   31894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
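The generated kube-vip static pod above holds a leader election in kube-system and advertises 192.168.39.254 via ARP, load-balancing port 8443 across control-plane nodes. A trivial Go probe for checking that the VIP has come up, which is just an illustration and not part of minikube or the test suite:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForVIP polls the kube-vip managed address until the API server port
    // accepts TCP connections.
    func waitForVIP(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("VIP %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForVIP("192.168.39.254:8443", time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("control-plane VIP is answering")
    }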
	I0829 18:26:06.424617   31894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:26:06.434031   31894 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:26:06.434123   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 18:26:06.442976   31894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 18:26:06.458034   31894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:26:06.473075   31894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 18:26:06.488549   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0829 18:26:06.503336   31894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:26:06.506900   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:06.517900   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:06.640996   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:26:06.657546   31894 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.39
	I0829 18:26:06.657574   31894 certs.go:194] generating shared ca certs ...
	I0829 18:26:06.657594   31894 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:06.657779   31894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:26:06.657829   31894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:26:06.657843   31894 certs.go:256] generating profile certs ...
	I0829 18:26:06.657908   31894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:26:06.657926   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt with IP's: []
	I0829 18:26:06.833897   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt ...
	I0829 18:26:06.833920   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt: {Name:mk803862989d3014c3f0f9b504b3f02d49baada0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:06.834075   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key ...
	I0829 18:26:06.834084   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key: {Name:mk7300df711cd15668d6488958571b6b4b07bc70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:06.834174   31894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7
	I0829 18:26:06.834189   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.254]
	I0829 18:26:07.101989   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7 ...
	I0829 18:26:07.102023   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7: {Name:mk00951deaf96cd75f54dbd1e69bfc47cc7fc9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.102207   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7 ...
	I0829 18:26:07.102224   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7: {Name:mk268bf097f2f487c3ef925c05ee57a582c2559a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.102294   31894 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:26:07.102389   31894 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
	I0829 18:26:07.102443   31894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:26:07.102461   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt with IP's: []
	I0829 18:26:07.181496   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt ...
	I0829 18:26:07.181527   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt: {Name:mk24182090946f9eb12d50db2a2a78f43a4dcb2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.181673   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key ...
	I0829 18:26:07.181691   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key: {Name:mk68143175544f4e4e481f32b6e72cda322b8ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.181760   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:26:07.181776   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:26:07.181787   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:26:07.181798   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:26:07.181808   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:26:07.181818   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:26:07.181828   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:26:07.181840   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:26:07.181906   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:26:07.181940   31894 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:26:07.181949   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:26:07.181971   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:26:07.182008   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:26:07.182034   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:26:07.182080   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:07.182135   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.182155   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.182168   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.182765   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:26:07.207474   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:26:07.230113   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:26:07.252672   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:26:07.275435   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 18:26:07.297843   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:26:07.321190   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:26:07.344146   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:26:07.366687   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:26:07.388171   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:26:07.412637   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:26:07.445470   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:26:07.463183   31894 ssh_runner.go:195] Run: openssl version
	I0829 18:26:07.469735   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:26:07.480017   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.484182   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.484241   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.489548   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:26:07.499332   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:26:07.508783   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.512801   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.512857   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.517956   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:26:07.527522   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:26:07.537444   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.541397   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.541458   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.546721   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
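Editor's note: the run above installs each CA bundle by copying it under /usr/share/ca-certificates, asking openssl for the certificate's subject hash, and symlinking /etc/ssl/certs/<hash>.0 back at the PEM file. A minimal Go sketch of that sequence (not minikube's own code; the cert path is one of the examples from the log, and it assumes openssl is on PATH and root privileges on the node):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path taken from the log above

    	// `openssl x509 -hash -noout -in <cert>` prints the subject hash that names the symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Mirror the `ln -fs` from the log: drop any stale link, then point <hash>.0 at the cert.
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }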
	I0829 18:26:07.556751   31894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:26:07.560526   31894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:26:07.560589   31894 kubeadm.go:392] StartCluster: {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:26:07.560682   31894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:26:07.560723   31894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:26:07.597019   31894 cri.go:89] found id: ""
	I0829 18:26:07.597103   31894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:26:07.606350   31894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:26:07.614722   31894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:26:07.622807   31894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:26:07.622826   31894 kubeadm.go:157] found existing configuration files:
	
	I0829 18:26:07.622875   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:26:07.630502   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:26:07.630544   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:26:07.638605   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:26:07.646170   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:26:07.646238   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:26:07.654851   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:26:07.662865   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:26:07.662908   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:26:07.671205   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:26:07.678975   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:26:07.679023   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
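Editor's note: the block above is the stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent, so kubeadm can regenerate it. A rough Go equivalent of that loop (illustrative only; the file names and endpoint string are taken from the log):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			// Missing or pointing at the wrong endpoint: drop it and let `kubeadm init` rewrite it.
    			_ = os.Remove(f)
    			fmt.Println("removed stale config:", f)
    		}
    	}
    }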
	I0829 18:26:07.687174   31894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:26:07.783868   31894 kubeadm.go:310] W0829 18:26:07.767739     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:26:07.784465   31894 kubeadm.go:310] W0829 18:26:07.768529     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:26:07.878060   31894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:26:22.425502   31894 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:26:22.425613   31894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:26:22.425713   31894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:26:22.425846   31894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:26:22.425968   31894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:26:22.426044   31894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:26:22.427617   31894 out.go:235]   - Generating certificates and keys ...
	I0829 18:26:22.427712   31894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:26:22.427808   31894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:26:22.427918   31894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:26:22.427987   31894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:26:22.428070   31894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:26:22.428141   31894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:26:22.428218   31894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:26:22.428391   31894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-782425 localhost] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0829 18:26:22.428472   31894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:26:22.428606   31894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-782425 localhost] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0829 18:26:22.428714   31894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:26:22.428813   31894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:26:22.428877   31894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:26:22.428959   31894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:26:22.429032   31894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:26:22.429113   31894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:26:22.429194   31894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:26:22.429280   31894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:26:22.429331   31894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:26:22.429411   31894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:26:22.429473   31894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:26:22.432035   31894 out.go:235]   - Booting up control plane ...
	I0829 18:26:22.432159   31894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:26:22.432261   31894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:26:22.432370   31894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:26:22.432499   31894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:26:22.432608   31894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:26:22.432652   31894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:26:22.432768   31894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:26:22.432865   31894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:26:22.432920   31894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001203498s
	I0829 18:26:22.432975   31894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:26:22.433020   31894 kubeadm.go:310] [api-check] The API server is healthy after 8.980651426s
	I0829 18:26:22.433105   31894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:26:22.433216   31894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:26:22.433291   31894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:26:22.433463   31894 kubeadm.go:310] [mark-control-plane] Marking the node ha-782425 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:26:22.433524   31894 kubeadm.go:310] [bootstrap-token] Using token: hmug4n.uc0tr7mprzanzx0o
	I0829 18:26:22.434804   31894 out.go:235]   - Configuring RBAC rules ...
	I0829 18:26:22.434891   31894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:26:22.434959   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:26:22.435087   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:26:22.435209   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:26:22.435319   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:26:22.435429   31894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:26:22.435527   31894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:26:22.435600   31894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:26:22.435671   31894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:26:22.435680   31894 kubeadm.go:310] 
	I0829 18:26:22.435763   31894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:26:22.435771   31894 kubeadm.go:310] 
	I0829 18:26:22.435847   31894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:26:22.435853   31894 kubeadm.go:310] 
	I0829 18:26:22.435874   31894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:26:22.435927   31894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:26:22.435978   31894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:26:22.435985   31894 kubeadm.go:310] 
	I0829 18:26:22.436043   31894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:26:22.436054   31894 kubeadm.go:310] 
	I0829 18:26:22.436089   31894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:26:22.436095   31894 kubeadm.go:310] 
	I0829 18:26:22.436134   31894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:26:22.436226   31894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:26:22.436328   31894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:26:22.436337   31894 kubeadm.go:310] 
	I0829 18:26:22.436444   31894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:26:22.436543   31894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:26:22.436555   31894 kubeadm.go:310] 
	I0829 18:26:22.436656   31894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hmug4n.uc0tr7mprzanzx0o \
	I0829 18:26:22.436752   31894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 18:26:22.436774   31894 kubeadm.go:310] 	--control-plane 
	I0829 18:26:22.436778   31894 kubeadm.go:310] 
	I0829 18:26:22.436845   31894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:26:22.436852   31894 kubeadm.go:310] 
	I0829 18:26:22.436922   31894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hmug4n.uc0tr7mprzanzx0o \
	I0829 18:26:22.437070   31894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
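Editor's note: the sha256 value in the join commands above is kubeadm's discovery-token CA certificate hash, i.e. the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA. A small standalone sketch that recomputes it from a CA certificate (the path matches where the log copies ca.crt on the node; treat it as illustrative):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }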
	I0829 18:26:22.437086   31894 cni.go:84] Creating CNI manager for ""
	I0829 18:26:22.437093   31894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 18:26:22.439028   31894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 18:26:22.440208   31894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 18:26:22.445542   31894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 18:26:22.445562   31894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 18:26:22.463693   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0829 18:26:22.849317   31894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:26:22.849415   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:26:22.849433   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-782425 minikube.k8s.io/updated_at=2024_08_29T18_26_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=ha-782425 minikube.k8s.io/primary=true
	I0829 18:26:22.895718   31894 ops.go:34] apiserver oom_adj: -16
	I0829 18:26:23.041273   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:26:23.541812   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:26:23.620308   31894 kubeadm.go:1113] duration metric: took 770.957594ms to wait for elevateKubeSystemPrivileges
	I0829 18:26:23.620352   31894 kubeadm.go:394] duration metric: took 16.059767851s to StartCluster
	I0829 18:26:23.620375   31894 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:23.620445   31894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:26:23.621113   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:23.621311   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:26:23.621318   31894 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:26:23.621334   31894 start.go:241] waiting for startup goroutines ...
	I0829 18:26:23.621341   31894 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 18:26:23.621382   31894 addons.go:69] Setting storage-provisioner=true in profile "ha-782425"
	I0829 18:26:23.621395   31894 addons.go:69] Setting default-storageclass=true in profile "ha-782425"
	I0829 18:26:23.621407   31894 addons.go:234] Setting addon storage-provisioner=true in "ha-782425"
	I0829 18:26:23.621427   31894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-782425"
	I0829 18:26:23.621430   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:23.621518   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:23.621786   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.621817   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.621823   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.621850   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.636750   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0829 18:26:23.637235   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.637848   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.637883   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.637894   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0829 18:26:23.638249   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.638298   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.638700   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.638723   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.638781   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.638805   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.639198   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.639403   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:23.641586   31894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:26:23.641814   31894 kapi.go:59] client config for ha-782425: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt", KeyFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key", CAFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0829 18:26:23.642275   31894 cert_rotation.go:140] Starting client certificate rotation controller
	I0829 18:26:23.642432   31894 addons.go:234] Setting addon default-storageclass=true in "ha-782425"
	I0829 18:26:23.642470   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:23.642730   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.642757   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.654166   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0829 18:26:23.654595   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.655144   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.655166   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.655538   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.655731   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:23.657530   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:23.657995   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0829 18:26:23.658434   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.658926   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.658941   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.659283   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.659764   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.659817   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.659878   31894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:26:23.661072   31894 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:26:23.661086   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:26:23.661098   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:23.664372   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.664817   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:23.664883   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.665064   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:23.665249   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:23.665397   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:23.665524   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
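Editor's note: sshutil builds an SSH client from the machine's IP, port 22, the per-machine id_rsa key and the docker user, and the addon YAML is then copied over that connection. A minimal sketch of the same connection using golang.org/x/crypto/ssh (not minikube's sshutil; host key checking is skipped here purely for brevity):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path taken from the log line above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.39:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("uname -a")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }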
	I0829 18:26:23.675129   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I0829 18:26:23.675526   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.675939   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.675958   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.676286   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.676486   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:23.678105   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:23.678309   31894 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:26:23.678328   31894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:26:23.678347   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:23.681147   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.681657   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:23.681687   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.681817   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:23.682006   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:23.682189   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:23.682327   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:23.781785   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:26:23.867153   31894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:26:23.874728   31894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:26:24.296714   31894 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
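Editor's note: the pipeline at 18:26:23.781785 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway IP, inserting a hosts block ahead of the forward plugin and replacing the ConfigMap. A hedged client-go sketch of the same edit (the kubeconfig path, gateway IP and indentation of the inserted block are illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	corefile := cm.Data["Corefile"]
    	idx := strings.Index(corefile, "forward . /etc/resolv.conf")
    	if idx < 0 {
    		panic("forward plugin not found in Corefile")
    	}
    	// Back up to the start of the forward line so the hosts block lands just above it.
    	lineStart := strings.LastIndex(corefile[:idx], "\n") + 1
    	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
    	cm.Data["Corefile"] = corefile[:lineStart] + hosts + corefile[lineStart:]

    	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("injected host.minikube.internal into the coredns Corefile")
    }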
	I0829 18:26:24.534402   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534430   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.534501   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534533   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.534760   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.534774   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.534782   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534788   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.534899   31894 main.go:141] libmachine: (ha-782425) DBG | Closing plugin on server side
	I0829 18:26:24.534903   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.534918   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.534949   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534961   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.535046   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.535058   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.536174   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.536185   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.536252   31894 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 18:26:24.536266   31894 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 18:26:24.536352   31894 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0829 18:26:24.536361   31894 round_trippers.go:469] Request Headers:
	I0829 18:26:24.536371   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:26:24.536375   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:26:24.550657   31894 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0829 18:26:24.551482   31894 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0829 18:26:24.551502   31894 round_trippers.go:469] Request Headers:
	I0829 18:26:24.551519   31894 round_trippers.go:473]     Content-Type: application/json
	I0829 18:26:24.551530   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:26:24.551536   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:26:24.554968   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
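Editor's note: the GET/PUT pair above is the default-storageclass addon marking the standard class as the cluster default. A rough client-go equivalent that sets the usual is-default-class annotation (an illustration of the API call, not the addon's exact code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	// Mark this class as the cluster default; the PUT in the log is the corresponding update call.
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

    	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("standard marked as default StorageClass")
    }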
	I0829 18:26:24.555329   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.555349   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.555615   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.555646   31894 main.go:141] libmachine: (ha-782425) DBG | Closing plugin on server side
	I0829 18:26:24.555663   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.557267   31894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0829 18:26:24.558409   31894 addons.go:510] duration metric: took 937.060796ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0829 18:26:24.558452   31894 start.go:246] waiting for cluster config update ...
	I0829 18:26:24.558467   31894 start.go:255] writing updated cluster config ...
	I0829 18:26:24.559795   31894 out.go:201] 
	I0829 18:26:24.560958   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:24.561021   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:24.562366   31894 out.go:177] * Starting "ha-782425-m02" control-plane node in "ha-782425" cluster
	I0829 18:26:24.563288   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:26:24.563317   31894 cache.go:56] Caching tarball of preloaded images
	I0829 18:26:24.563443   31894 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:26:24.563460   31894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:26:24.563556   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:24.563792   31894 start.go:360] acquireMachinesLock for ha-782425-m02: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:26:24.563849   31894 start.go:364] duration metric: took 30.889µs to acquireMachinesLock for "ha-782425-m02"
	I0829 18:26:24.563873   31894 start.go:93] Provisioning new machine with config: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:26:24.563984   31894 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0829 18:26:24.565373   31894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 18:26:24.565468   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:24.565499   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:24.579868   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36615
	I0829 18:26:24.580329   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:24.580779   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:24.580794   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:24.581121   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:24.581320   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:24.581467   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:24.581662   31894 start.go:159] libmachine.API.Create for "ha-782425" (driver="kvm2")
	I0829 18:26:24.581687   31894 client.go:168] LocalClient.Create starting
	I0829 18:26:24.581726   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:26:24.581767   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:26:24.581790   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:26:24.581870   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:26:24.581897   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:26:24.581917   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:26:24.581938   31894 main.go:141] libmachine: Running pre-create checks...
	I0829 18:26:24.581950   31894 main.go:141] libmachine: (ha-782425-m02) Calling .PreCreateCheck
	I0829 18:26:24.582114   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetConfigRaw
	I0829 18:26:24.582554   31894 main.go:141] libmachine: Creating machine...
	I0829 18:26:24.582572   31894 main.go:141] libmachine: (ha-782425-m02) Calling .Create
	I0829 18:26:24.582686   31894 main.go:141] libmachine: (ha-782425-m02) Creating KVM machine...
	I0829 18:26:24.583646   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found existing default KVM network
	I0829 18:26:24.583738   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found existing private KVM network mk-ha-782425
	I0829 18:26:24.583867   31894 main.go:141] libmachine: (ha-782425-m02) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02 ...
	I0829 18:26:24.583895   31894 main.go:141] libmachine: (ha-782425-m02) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:26:24.583942   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:24.583849   32252 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:26:24.584044   31894 main.go:141] libmachine: (ha-782425-m02) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:26:24.812205   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:24.812048   32252 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa...
	I0829 18:26:25.012329   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:25.012158   32252 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/ha-782425-m02.rawdisk...
	I0829 18:26:25.012369   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Writing magic tar header
	I0829 18:26:25.012391   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Writing SSH key tar header
	I0829 18:26:25.012404   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:25.012268   32252 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02 ...
	I0829 18:26:25.012417   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02 (perms=drwx------)
	I0829 18:26:25.012433   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:26:25.012444   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:26:25.012458   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:26:25.012479   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:26:25.012497   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02
	I0829 18:26:25.012509   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:26:25.012529   31894 main.go:141] libmachine: (ha-782425-m02) Creating domain...
	I0829 18:26:25.012556   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:26:25.012571   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:26:25.012601   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:26:25.012625   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:26:25.012636   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:26:25.012646   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home
	I0829 18:26:25.012658   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Skipping /home - not owner
	I0829 18:26:25.013572   31894 main.go:141] libmachine: (ha-782425-m02) define libvirt domain using xml: 
	I0829 18:26:25.013596   31894 main.go:141] libmachine: (ha-782425-m02) <domain type='kvm'>
	I0829 18:26:25.013608   31894 main.go:141] libmachine: (ha-782425-m02)   <name>ha-782425-m02</name>
	I0829 18:26:25.013616   31894 main.go:141] libmachine: (ha-782425-m02)   <memory unit='MiB'>2200</memory>
	I0829 18:26:25.013645   31894 main.go:141] libmachine: (ha-782425-m02)   <vcpu>2</vcpu>
	I0829 18:26:25.013657   31894 main.go:141] libmachine: (ha-782425-m02)   <features>
	I0829 18:26:25.013666   31894 main.go:141] libmachine: (ha-782425-m02)     <acpi/>
	I0829 18:26:25.013677   31894 main.go:141] libmachine: (ha-782425-m02)     <apic/>
	I0829 18:26:25.013688   31894 main.go:141] libmachine: (ha-782425-m02)     <pae/>
	I0829 18:26:25.013699   31894 main.go:141] libmachine: (ha-782425-m02)     
	I0829 18:26:25.013720   31894 main.go:141] libmachine: (ha-782425-m02)   </features>
	I0829 18:26:25.013735   31894 main.go:141] libmachine: (ha-782425-m02)   <cpu mode='host-passthrough'>
	I0829 18:26:25.013741   31894 main.go:141] libmachine: (ha-782425-m02)   
	I0829 18:26:25.013747   31894 main.go:141] libmachine: (ha-782425-m02)   </cpu>
	I0829 18:26:25.013755   31894 main.go:141] libmachine: (ha-782425-m02)   <os>
	I0829 18:26:25.013759   31894 main.go:141] libmachine: (ha-782425-m02)     <type>hvm</type>
	I0829 18:26:25.013764   31894 main.go:141] libmachine: (ha-782425-m02)     <boot dev='cdrom'/>
	I0829 18:26:25.013771   31894 main.go:141] libmachine: (ha-782425-m02)     <boot dev='hd'/>
	I0829 18:26:25.013777   31894 main.go:141] libmachine: (ha-782425-m02)     <bootmenu enable='no'/>
	I0829 18:26:25.013784   31894 main.go:141] libmachine: (ha-782425-m02)   </os>
	I0829 18:26:25.013789   31894 main.go:141] libmachine: (ha-782425-m02)   <devices>
	I0829 18:26:25.013797   31894 main.go:141] libmachine: (ha-782425-m02)     <disk type='file' device='cdrom'>
	I0829 18:26:25.013806   31894 main.go:141] libmachine: (ha-782425-m02)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/boot2docker.iso'/>
	I0829 18:26:25.013816   31894 main.go:141] libmachine: (ha-782425-m02)       <target dev='hdc' bus='scsi'/>
	I0829 18:26:25.013847   31894 main.go:141] libmachine: (ha-782425-m02)       <readonly/>
	I0829 18:26:25.013869   31894 main.go:141] libmachine: (ha-782425-m02)     </disk>
	I0829 18:26:25.013882   31894 main.go:141] libmachine: (ha-782425-m02)     <disk type='file' device='disk'>
	I0829 18:26:25.013897   31894 main.go:141] libmachine: (ha-782425-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:26:25.013914   31894 main.go:141] libmachine: (ha-782425-m02)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/ha-782425-m02.rawdisk'/>
	I0829 18:26:25.013926   31894 main.go:141] libmachine: (ha-782425-m02)       <target dev='hda' bus='virtio'/>
	I0829 18:26:25.013938   31894 main.go:141] libmachine: (ha-782425-m02)     </disk>
	I0829 18:26:25.013960   31894 main.go:141] libmachine: (ha-782425-m02)     <interface type='network'>
	I0829 18:26:25.013974   31894 main.go:141] libmachine: (ha-782425-m02)       <source network='mk-ha-782425'/>
	I0829 18:26:25.013985   31894 main.go:141] libmachine: (ha-782425-m02)       <model type='virtio'/>
	I0829 18:26:25.013996   31894 main.go:141] libmachine: (ha-782425-m02)     </interface>
	I0829 18:26:25.014007   31894 main.go:141] libmachine: (ha-782425-m02)     <interface type='network'>
	I0829 18:26:25.014018   31894 main.go:141] libmachine: (ha-782425-m02)       <source network='default'/>
	I0829 18:26:25.014029   31894 main.go:141] libmachine: (ha-782425-m02)       <model type='virtio'/>
	I0829 18:26:25.014041   31894 main.go:141] libmachine: (ha-782425-m02)     </interface>
	I0829 18:26:25.014051   31894 main.go:141] libmachine: (ha-782425-m02)     <serial type='pty'>
	I0829 18:26:25.014073   31894 main.go:141] libmachine: (ha-782425-m02)       <target port='0'/>
	I0829 18:26:25.014081   31894 main.go:141] libmachine: (ha-782425-m02)     </serial>
	I0829 18:26:25.014102   31894 main.go:141] libmachine: (ha-782425-m02)     <console type='pty'>
	I0829 18:26:25.014121   31894 main.go:141] libmachine: (ha-782425-m02)       <target type='serial' port='0'/>
	I0829 18:26:25.014137   31894 main.go:141] libmachine: (ha-782425-m02)     </console>
	I0829 18:26:25.014150   31894 main.go:141] libmachine: (ha-782425-m02)     <rng model='virtio'>
	I0829 18:26:25.014161   31894 main.go:141] libmachine: (ha-782425-m02)       <backend model='random'>/dev/random</backend>
	I0829 18:26:25.014169   31894 main.go:141] libmachine: (ha-782425-m02)     </rng>
	I0829 18:26:25.014176   31894 main.go:141] libmachine: (ha-782425-m02)     
	I0829 18:26:25.014187   31894 main.go:141] libmachine: (ha-782425-m02)     
	I0829 18:26:25.014198   31894 main.go:141] libmachine: (ha-782425-m02)   </devices>
	I0829 18:26:25.014209   31894 main.go:141] libmachine: (ha-782425-m02) </domain>
	I0829 18:26:25.014222   31894 main.go:141] libmachine: (ha-782425-m02) 
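Editor's note: the XML dump above is the libvirt domain definition for the second control-plane VM. A minimal sketch of defining and booting such a domain with the libvirt.org/go/libvirt bindings (assuming those bindings and a local qemu:///system daemon; the XML file name is illustrative and would hold the document printed above):

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	xml, err := os.ReadFile("ha-782425-m02.xml") // illustrative file holding the domain XML above
    	if err != nil {
    		panic(err)
    	}

    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Define the persistent domain from XML, then start it (the "Creating domain..." step in the log).
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		panic(err)
    	}
    	fmt.Println("domain defined and started")
    }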
	I0829 18:26:25.020795   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:87:5f:42 in network default
	I0829 18:26:25.021324   31894 main.go:141] libmachine: (ha-782425-m02) Ensuring networks are active...
	I0829 18:26:25.021348   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:25.022028   31894 main.go:141] libmachine: (ha-782425-m02) Ensuring network default is active
	I0829 18:26:25.022391   31894 main.go:141] libmachine: (ha-782425-m02) Ensuring network mk-ha-782425 is active
	I0829 18:26:25.022758   31894 main.go:141] libmachine: (ha-782425-m02) Getting domain xml...
	I0829 18:26:25.023485   31894 main.go:141] libmachine: (ha-782425-m02) Creating domain...
	I0829 18:26:26.229097   31894 main.go:141] libmachine: (ha-782425-m02) Waiting to get IP...
	I0829 18:26:26.229953   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:26.230456   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:26.230482   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:26.230409   32252 retry.go:31] will retry after 237.142818ms: waiting for machine to come up
	I0829 18:26:26.469824   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:26.470329   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:26.470361   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:26.470277   32252 retry.go:31] will retry after 242.315813ms: waiting for machine to come up
	I0829 18:26:26.713718   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:26.714266   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:26.714296   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:26.714217   32252 retry.go:31] will retry after 341.179806ms: waiting for machine to come up
	I0829 18:26:27.056776   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:27.057265   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:27.057294   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:27.057217   32252 retry.go:31] will retry after 595.192989ms: waiting for machine to come up
	I0829 18:26:27.653881   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:27.654386   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:27.654424   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:27.654332   32252 retry.go:31] will retry after 521.996873ms: waiting for machine to come up
	I0829 18:26:28.177994   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:28.178365   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:28.178393   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:28.178331   32252 retry.go:31] will retry after 887.019406ms: waiting for machine to come up
	I0829 18:26:29.067331   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:29.067765   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:29.067802   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:29.067761   32252 retry.go:31] will retry after 881.071096ms: waiting for machine to come up
	I0829 18:26:29.949908   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:29.950225   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:29.950246   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:29.950203   32252 retry.go:31] will retry after 971.946782ms: waiting for machine to come up
	I0829 18:26:30.924291   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:30.924673   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:30.924707   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:30.924637   32252 retry.go:31] will retry after 1.32152902s: waiting for machine to come up
	I0829 18:26:32.248043   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:32.248448   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:32.248474   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:32.248405   32252 retry.go:31] will retry after 1.905467671s: waiting for machine to come up
	I0829 18:26:34.155199   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:34.155548   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:34.155578   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:34.155497   32252 retry.go:31] will retry after 2.896327126s: waiting for machine to come up
	I0829 18:26:37.054991   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:37.055413   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:37.055457   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:37.055369   32252 retry.go:31] will retry after 2.938271443s: waiting for machine to come up
	I0829 18:26:39.995460   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:39.995861   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:39.995887   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:39.995826   32252 retry.go:31] will retry after 3.097722772s: waiting for machine to come up
	I0829 18:26:43.095812   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:43.096180   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:43.096202   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:43.096138   32252 retry.go:31] will retry after 5.653782019s: waiting for machine to come up
	I0829 18:26:48.754518   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.754970   31894 main.go:141] libmachine: (ha-782425-m02) Found IP for machine: 192.168.39.253
	I0829 18:26:48.754996   31894 main.go:141] libmachine: (ha-782425-m02) Reserving static IP address...
	I0829 18:26:48.755009   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has current primary IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.755387   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find host DHCP lease matching {name: "ha-782425-m02", mac: "52:54:00:15:79:c5", ip: "192.168.39.253"} in network mk-ha-782425
	I0829 18:26:48.824716   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Getting to WaitForSSH function...
	I0829 18:26:48.824744   31894 main.go:141] libmachine: (ha-782425-m02) Reserved static IP address: 192.168.39.253
	I0829 18:26:48.824757   31894 main.go:141] libmachine: (ha-782425-m02) Waiting for SSH to be available...
	I0829 18:26:48.827487   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.827905   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:48.827937   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.828060   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Using SSH client type: external
	I0829 18:26:48.828083   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa (-rw-------)
	I0829 18:26:48.828111   31894 main.go:141] libmachine: (ha-782425-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:26:48.828124   31894 main.go:141] libmachine: (ha-782425-m02) DBG | About to run SSH command:
	I0829 18:26:48.828213   31894 main.go:141] libmachine: (ha-782425-m02) DBG | exit 0
	I0829 18:26:48.950130   31894 main.go:141] libmachine: (ha-782425-m02) DBG | SSH cmd err, output: <nil>: 
	I0829 18:26:48.950378   31894 main.go:141] libmachine: (ha-782425-m02) KVM machine creation complete!
	I0829 18:26:48.950774   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetConfigRaw
	I0829 18:26:48.951236   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:48.951416   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:48.951620   31894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:26:48.951640   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:26:48.952783   31894 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:26:48.952795   31894 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:26:48.952800   31894 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:26:48.952806   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:48.955023   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.955373   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:48.955400   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.955530   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:48.955707   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:48.955859   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:48.956021   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:48.956191   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:48.956388   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:48.956397   31894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:26:49.057053   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:26:49.057081   31894 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:26:49.057092   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.059825   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.060176   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.060198   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.060366   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.060522   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.060689   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.060816   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.060948   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.061103   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.061114   31894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:26:49.158598   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:26:49.158654   31894 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:26:49.158661   31894 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:26:49.158668   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:49.158943   31894 buildroot.go:166] provisioning hostname "ha-782425-m02"
	I0829 18:26:49.158973   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:49.159180   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.161715   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.162138   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.162164   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.162301   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.162472   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.162613   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.162734   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.162859   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.163113   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.163135   31894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425-m02 && echo "ha-782425-m02" | sudo tee /etc/hostname
	I0829 18:26:49.271395   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425-m02
	
	I0829 18:26:49.271419   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.274146   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.274575   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.274606   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.274764   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.274952   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.275078   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.275243   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.275399   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.275553   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.275567   31894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:26:49.378107   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:26:49.378139   31894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:26:49.378155   31894 buildroot.go:174] setting up certificates
	I0829 18:26:49.378162   31894 provision.go:84] configureAuth start
	I0829 18:26:49.378170   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:49.378449   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:49.381117   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.381453   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.381485   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.381615   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.383655   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.383942   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.383963   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.384090   31894 provision.go:143] copyHostCerts
	I0829 18:26:49.384120   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:49.384149   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:26:49.384158   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:49.384221   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:26:49.384290   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:49.384307   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:26:49.384314   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:49.384338   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:26:49.384382   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:49.384400   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:26:49.384406   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:49.384425   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:26:49.384472   31894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425-m02 san=[127.0.0.1 192.168.39.253 ha-782425-m02 localhost minikube]
	I0829 18:26:49.532968   31894 provision.go:177] copyRemoteCerts
	I0829 18:26:49.533025   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:26:49.533048   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.535572   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.535900   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.535929   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.536080   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.536237   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.536361   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.536456   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:49.611693   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:26:49.611749   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:26:49.634177   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:26:49.634250   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:26:49.658566   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:26:49.658661   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:26:49.683473   31894 provision.go:87] duration metric: took 305.298786ms to configureAuth
	I0829 18:26:49.683495   31894 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:26:49.683689   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:49.683765   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.686349   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.686849   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.686885   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.687061   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.687228   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.687354   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.687470   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.687658   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.687843   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.687859   31894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:26:49.896518   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:26:49.896541   31894 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:26:49.896551   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetURL
	I0829 18:26:49.897762   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Using libvirt version 6000000
	I0829 18:26:49.899894   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.900353   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.900387   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.900522   31894 main.go:141] libmachine: Docker is up and running!
	I0829 18:26:49.900537   31894 main.go:141] libmachine: Reticulating splines...
	I0829 18:26:49.900544   31894 client.go:171] duration metric: took 25.318847548s to LocalClient.Create
	I0829 18:26:49.900564   31894 start.go:167] duration metric: took 25.318905692s to libmachine.API.Create "ha-782425"
	I0829 18:26:49.900575   31894 start.go:293] postStartSetup for "ha-782425-m02" (driver="kvm2")
	I0829 18:26:49.900588   31894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:26:49.900617   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:49.900833   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:26:49.900856   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.903094   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.903457   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.903483   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.903600   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.903780   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.903938   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.904071   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:49.979923   31894 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:26:49.983726   31894 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:26:49.983748   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:26:49.983804   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:26:49.983870   31894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:26:49.983880   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:26:49.983955   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:26:49.992355   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:50.013967   31894 start.go:296] duration metric: took 113.380706ms for postStartSetup
	I0829 18:26:50.014019   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetConfigRaw
	I0829 18:26:50.014605   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:50.017312   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.017650   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.017671   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.017867   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:50.018069   31894 start.go:128] duration metric: took 25.454075609s to createHost
	I0829 18:26:50.018104   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:50.020313   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.020652   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.020675   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.020813   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:50.020971   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.021108   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.021259   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:50.021420   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:50.021615   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:50.021627   31894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:26:50.114540   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724956010.095185222
	
	I0829 18:26:50.114564   31894 fix.go:216] guest clock: 1724956010.095185222
	I0829 18:26:50.114573   31894 fix.go:229] Guest: 2024-08-29 18:26:50.095185222 +0000 UTC Remote: 2024-08-29 18:26:50.018079841 +0000 UTC m=+72.186075366 (delta=77.105381ms)
	I0829 18:26:50.114605   31894 fix.go:200] guest clock delta is within tolerance: 77.105381ms
	I0829 18:26:50.114612   31894 start.go:83] releasing machines lock for "ha-782425-m02", held for 25.550749818s
	I0829 18:26:50.114634   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.114882   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:50.117266   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.117616   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.117645   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.119624   31894 out.go:177] * Found network options:
	I0829 18:26:50.120677   31894 out.go:177]   - NO_PROXY=192.168.39.39
	W0829 18:26:50.121590   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:26:50.121613   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.122163   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.122361   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.122475   31894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:26:50.122508   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	W0829 18:26:50.122535   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:26:50.122608   31894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:26:50.122626   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:50.125046   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125190   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125427   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.125452   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125553   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:50.125656   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.125692   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125754   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.125826   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:50.125894   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:50.126034   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.126052   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:50.126217   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:50.126372   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:50.349617   31894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:26:50.355355   31894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:26:50.355428   31894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:26:50.370751   31894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:26:50.370778   31894 start.go:495] detecting cgroup driver to use...
	I0829 18:26:50.370852   31894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:26:50.385898   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:26:50.399592   31894 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:26:50.399667   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:26:50.413250   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:26:50.427350   31894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:26:50.541879   31894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:26:50.692562   31894 docker.go:233] disabling docker service ...
	I0829 18:26:50.692650   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:26:50.707727   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:26:50.720199   31894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:26:50.866477   31894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:26:50.989936   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:26:51.003683   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:26:51.023184   31894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:26:51.023256   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.032770   31894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:26:51.032828   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.042672   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.052846   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.062397   31894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:26:51.072081   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.081582   31894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.098364   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.108109   31894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:26:51.117022   31894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:26:51.117077   31894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:26:51.128752   31894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:26:51.137880   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:51.261126   31894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:26:51.347424   31894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:26:51.347554   31894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:26:51.352210   31894 start.go:563] Will wait 60s for crictl version
	I0829 18:26:51.352272   31894 ssh_runner.go:195] Run: which crictl
	I0829 18:26:51.355953   31894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:26:51.391213   31894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:26:51.391285   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:51.418270   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:51.445893   31894 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:26:51.447167   31894 out.go:177]   - env NO_PROXY=192.168.39.39
	I0829 18:26:51.448349   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:51.450818   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:51.451141   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:51.451169   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:51.451372   31894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:26:51.455456   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:51.467458   31894 mustload.go:65] Loading cluster: ha-782425
	I0829 18:26:51.467649   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:51.467904   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:51.467937   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:51.482321   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0829 18:26:51.482756   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:51.483190   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:51.483210   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:51.483572   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:51.483755   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:51.485349   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:51.485627   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:51.485686   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:51.500890   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
	I0829 18:26:51.501294   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:51.501713   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:51.501740   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:51.502059   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:51.502268   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:51.502424   31894 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.253
	I0829 18:26:51.502438   31894 certs.go:194] generating shared ca certs ...
	I0829 18:26:51.502456   31894 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:51.502597   31894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:26:51.502643   31894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:26:51.502653   31894 certs.go:256] generating profile certs ...
	I0829 18:26:51.502720   31894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:26:51.502744   31894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6
	I0829 18:26:51.502756   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.253 192.168.39.254]
	I0829 18:26:51.698684   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6 ...
	I0829 18:26:51.698716   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6: {Name:mkf0e9d9ffd254e920b63ad96df28873faca93cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:51.698891   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6 ...
	I0829 18:26:51.698904   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6: {Name:mk6960e3e0d1e62eafe3259930954d26962a40f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:51.698983   31894 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:26:51.699126   31894 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
	I0829 18:26:51.699258   31894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:26:51.699276   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:26:51.699290   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:26:51.699312   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:26:51.699328   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:26:51.699343   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:26:51.699358   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:26:51.699373   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:26:51.699388   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:26:51.699441   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:26:51.699473   31894 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:26:51.699483   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:26:51.699509   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:26:51.699540   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:26:51.699565   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:26:51.699606   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:51.699634   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:51.699651   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:26:51.699665   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:26:51.699699   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:51.702662   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:51.703051   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:51.703077   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:51.703281   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:51.703469   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:51.703636   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:51.703777   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:51.778482   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 18:26:51.783452   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 18:26:51.794645   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 18:26:51.805231   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0829 18:26:51.817768   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 18:26:51.821821   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 18:26:51.833444   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 18:26:51.838413   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0829 18:26:51.851612   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 18:26:51.860669   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 18:26:51.872429   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 18:26:51.876283   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 18:26:51.887468   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:26:51.911598   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:26:51.933833   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:26:51.955722   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:26:51.976904   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 18:26:51.997635   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:26:52.019051   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:26:52.040223   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:26:52.061308   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:26:52.082293   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:26:52.103597   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:26:52.125881   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 18:26:52.142365   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0829 18:26:52.157971   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 18:26:52.178944   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0829 18:26:52.195384   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 18:26:52.210359   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 18:26:52.226862   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 18:26:52.241892   31894 ssh_runner.go:195] Run: openssl version
	I0829 18:26:52.247232   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:26:52.257482   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:52.261899   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:52.261957   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:52.267217   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:26:52.277075   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:26:52.287868   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:26:52.292034   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:26:52.292087   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:26:52.297500   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:26:52.307519   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:26:52.317521   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:26:52.321727   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:26:52.321778   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:26:52.327184   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:26:52.337920   31894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:26:52.341915   31894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:26:52.341977   31894 kubeadm.go:934] updating node {m02 192.168.39.253 8443 v1.31.0 crio true true} ...
	I0829 18:26:52.342064   31894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:26:52.342119   31894 kube-vip.go:115] generating kube-vip config ...
	I0829 18:26:52.342166   31894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:26:52.359964   31894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:26:52.360047   31894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0829 18:26:52.360114   31894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:26:52.369722   31894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 18:26:52.369812   31894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 18:26:52.378997   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 18:26:52.379029   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:26:52.379043   31894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0829 18:26:52.379102   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:26:52.379046   31894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0829 18:26:52.383270   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 18:26:52.383302   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 18:26:53.331385   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:26:53.331488   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:26:53.336704   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 18:26:53.336745   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 18:26:53.471271   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:26:53.507741   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:26:53.507857   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:26:53.523679   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 18:26:53.523720   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0829 18:26:53.864618   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 18:26:53.874698   31894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:26:53.890036   31894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:26:53.905409   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 18:26:53.920758   31894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:26:53.924420   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:53.936824   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:54.059981   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:26:54.076111   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:54.076445   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:54.076492   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:54.091747   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0829 18:26:54.092196   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:54.092730   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:54.092755   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:54.093141   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:54.093353   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:54.093507   31894 start.go:317] joinCluster: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:26:54.093623   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 18:26:54.093649   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:54.096423   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:54.096918   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:54.096944   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:54.097130   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:54.097307   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:54.097457   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:54.097586   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:54.239537   31894 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:26:54.239582   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0avp5s.23nn67rbaqfsi40a --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m02 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0829 18:27:15.021821   31894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0avp5s.23nn67rbaqfsi40a --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m02 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (20.782191808s)
	I0829 18:27:15.021888   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 18:27:15.612755   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-782425-m02 minikube.k8s.io/updated_at=2024_08_29T18_27_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=ha-782425 minikube.k8s.io/primary=false
	I0829 18:27:15.731860   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-782425-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 18:27:15.862540   31894 start.go:319] duration metric: took 21.769029029s to joinCluster
	I0829 18:27:15.862630   31894 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:27:15.862962   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:27:15.864096   31894 out.go:177] * Verifying Kubernetes components...
	I0829 18:27:15.865276   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:27:16.124824   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:27:16.172898   31894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:27:16.173244   31894 kapi.go:59] client config for ha-782425: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt", KeyFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key", CAFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 18:27:16.173320   31894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.39:8443
	I0829 18:27:16.173613   31894 node_ready.go:35] waiting up to 6m0s for node "ha-782425-m02" to be "Ready" ...
	I0829 18:27:16.173770   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:16.173786   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:16.173796   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:16.173808   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:16.184897   31894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0829 18:27:16.673841   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:16.673863   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:16.673871   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:16.673876   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:16.685724   31894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0829 18:27:17.174652   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:17.174676   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:17.174685   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:17.174688   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:17.183591   31894 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 18:27:17.673859   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:17.673879   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:17.673888   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:17.673892   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:17.676930   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:18.173805   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:18.173828   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:18.173835   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:18.173839   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:18.177547   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:18.178015   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:18.674084   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:18.674122   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:18.674130   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:18.674135   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:18.677314   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:19.174530   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:19.174558   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:19.174569   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:19.174574   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:19.178294   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:19.674721   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:19.674748   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:19.674756   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:19.674759   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:19.678013   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:20.174266   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:20.174293   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:20.174309   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:20.174316   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:20.177447   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:20.178203   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:20.674531   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:20.674550   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:20.674558   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:20.674562   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:20.677874   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:21.174774   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:21.174800   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:21.174812   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:21.174818   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:21.179000   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:21.673783   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:21.673806   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:21.673816   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:21.673824   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:21.677004   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:22.173909   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:22.173934   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:22.173942   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:22.173947   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:22.193187   31894 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0829 18:27:22.193743   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:22.673989   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:22.674020   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:22.674032   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:22.674038   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:22.680063   31894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0829 18:27:23.174433   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:23.174453   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:23.174461   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:23.174466   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:23.177961   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:23.674354   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:23.674378   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:23.674390   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:23.674398   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:23.680636   31894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0829 18:27:24.173781   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:24.173805   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:24.173814   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:24.173821   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:24.177001   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:24.674828   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:24.674851   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:24.674859   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:24.674863   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:24.678063   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:24.678649   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:25.173897   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:25.173919   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:25.173927   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:25.173935   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:25.176807   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:25.674819   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:25.674846   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:25.674857   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:25.674863   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:25.677776   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:26.173778   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:26.173801   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:26.173809   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:26.173812   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:26.176825   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:26.674798   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:26.674821   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:26.674830   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:26.674834   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:26.677805   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:27.174452   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:27.174478   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:27.174488   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:27.174492   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:27.177827   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:27.178363   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:27.674779   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:27.674802   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:27.674809   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:27.674814   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:27.677457   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:28.173974   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:28.173992   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:28.173999   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:28.174002   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:28.176837   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:28.674759   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:28.674782   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:28.674790   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:28.674795   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:28.678731   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:29.173798   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:29.173817   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:29.173825   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:29.173828   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:29.176964   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:29.674283   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:29.674305   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:29.674312   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:29.674318   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:29.677442   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:29.677913   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:30.174410   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:30.174433   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:30.174445   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:30.174452   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:30.177537   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:30.674265   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:30.674289   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:30.674297   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:30.674300   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:30.677897   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:31.173978   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:31.174003   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:31.174011   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:31.174016   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:31.177139   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:31.674159   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:31.674181   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:31.674190   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:31.674194   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:31.677204   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:32.174257   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:32.174280   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:32.174288   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:32.174291   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:32.179759   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:27:32.180267   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:32.674573   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:32.674599   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:32.674611   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:32.674618   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:32.678139   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:33.174708   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:33.174730   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:33.174738   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:33.174742   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:33.177578   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:33.674591   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:33.674614   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:33.674622   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:33.674625   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:33.678068   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.174786   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.174809   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.174817   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.174820   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.178381   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.178993   31894 node_ready.go:49] node "ha-782425-m02" has status "Ready":"True"
	I0829 18:27:34.179011   31894 node_ready.go:38] duration metric: took 18.005376284s for node "ha-782425-m02" to be "Ready" ...
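The wait loop above is minikube's node readiness check: it repeatedly issues GET /api/v1/nodes/ha-782425-m02 and finishes once the returned Node object carries a "Ready" condition with status "True" (node_ready.go:49). A minimal, self-contained Go sketch of that check follows; it is an illustration only, not minikube's actual helper, and the trimmed JSON payload in main is an assumed example.

// Sketch (not minikube's code): decide whether a Node, as returned by the
// API server, is Ready. Only the fields needed for the check are modeled.
package main

import (
	"encoding/json"
	"fmt"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the JSON body contains a condition of type
// "Ready" whose status is "True".
func nodeReady(body []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Assumed, heavily trimmed payload; a real response carries many more fields.
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, err := nodeReady(body)
	fmt.Println(ok, err) // true <nil>
}

Once this check passes, the log switches to the per-pod readiness wait that follows.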
	I0829 18:27:34.179020   31894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:27:34.179102   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:34.179115   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.179122   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.179127   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.183202   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:34.191791   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.191876   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nw2x2
	I0829 18:27:34.191887   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.191896   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.191905   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.196079   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:34.196953   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.196970   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.196979   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.196986   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.199883   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.200457   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.200476   31894 pod_ready.go:82] duration metric: took 8.659056ms for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.200486   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.200548   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qhxm5
	I0829 18:27:34.200558   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.200565   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.200575   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.203309   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.203892   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.203908   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.203917   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.203923   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.206392   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.206857   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.206873   31894 pod_ready.go:82] duration metric: took 6.38056ms for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.206882   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.206924   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425
	I0829 18:27:34.206931   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.206938   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.206942   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.209466   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.210151   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.210167   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.210177   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.210182   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.212469   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.212939   31894 pod_ready.go:93] pod "etcd-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.212959   31894 pod_ready.go:82] duration metric: took 6.070221ms for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.212970   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.213032   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m02
	I0829 18:27:34.213042   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.213052   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.213060   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.215836   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.216488   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.216505   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.216515   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.216521   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.219029   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.219488   31894 pod_ready.go:93] pod "etcd-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.219505   31894 pod_ready.go:82] duration metric: took 6.524275ms for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.219521   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.374829   31894 request.go:632] Waited for 155.237189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:27:34.374892   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:27:34.374899   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.374909   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.374918   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.378443   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.575302   31894 request.go:632] Waited for 196.186988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.575357   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.575363   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.575370   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.575374   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.578698   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.579088   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.579104   31894 pod_ready.go:82] duration metric: took 359.570997ms for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
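The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the Kubernetes client's own rate limiter: with QPS and Burst left at 0 in the rest.Config (see kapi.go:59 above), the client falls back to a low default request rate, so back-to-back GETs queue for roughly 1/QPS seconds each, which is consistent with the ~150-200ms waits being logged. A minimal token-bucket sketch of that behaviour follows; it is illustrative only, not client-go's actual limiter, and the QPS of 5 used in main is an assumed example value.

// Sketch (assumption: illustrative only, not client-go's implementation):
// a token bucket that makes callers wait once they exceed a sustained rate.
package main

import (
	"fmt"
	"time"
)

type bucket struct {
	tokens   float64   // currently available tokens
	max      float64   // burst capacity
	rate     float64   // tokens added per second (QPS)
	lastFill time.Time // last time tokens were refilled
}

func newBucket(qps, burst float64) *bucket {
	return &bucket{tokens: burst, max: burst, rate: qps, lastFill: time.Now()}
}

// take blocks until a token is available and returns how long it waited.
func (b *bucket) take() time.Duration {
	now := time.Now()
	b.tokens += now.Sub(b.lastFill).Seconds() * b.rate
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.lastFill = now
	if b.tokens >= 1 {
		b.tokens--
		return 0
	}
	wait := time.Duration((1 - b.tokens) / b.rate * float64(time.Second))
	time.Sleep(wait)
	b.tokens = 0
	b.lastFill = time.Now()
	return wait
}

func main() {
	// With an assumed QPS of 5 and burst of 1, back-to-back calls wait ~200ms each.
	lim := newBucket(5, 1)
	for i := 0; i < 3; i++ {
		fmt.Printf("request %d waited %v\n", i, lim.take())
	}
}

Each pod readiness check in the log makes two throttled GETs (the pod, then its node), which is why the per-pod durations that follow settle around 400ms.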
	I0829 18:27:34.579112   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.775532   31894 request.go:632] Waited for 196.367952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:27:34.775624   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:27:34.775632   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.775639   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.775643   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.779877   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:34.975797   31894 request.go:632] Waited for 195.36549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.975880   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.975891   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.975901   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.975910   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.979290   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.979813   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.979831   31894 pod_ready.go:82] duration metric: took 400.713484ms for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.979841   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.174927   31894 request.go:632] Waited for 195.018055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:27:35.174988   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:27:35.174992   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.175000   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.175004   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.178232   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.375378   31894 request.go:632] Waited for 196.371474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:35.375427   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:35.375433   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.375440   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.375445   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.378937   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.379567   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:35.379588   31894 pod_ready.go:82] duration metric: took 399.738929ms for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.379604   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.575600   31894 request.go:632] Waited for 195.935535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:27:35.575675   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:27:35.575680   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.575688   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.575692   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.578977   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.775052   31894 request.go:632] Waited for 195.310084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:35.775107   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:35.775112   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.775119   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.775123   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.778281   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.778974   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:35.778993   31894 pod_ready.go:82] duration metric: took 399.382265ms for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.779002   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.975096   31894 request.go:632] Waited for 196.038385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:27:35.975191   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:27:35.975203   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.975214   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.975222   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.979773   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:36.174894   31894 request.go:632] Waited for 194.298911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:36.174953   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:36.174962   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.174973   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.174977   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.178216   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.178744   31894 pod_ready.go:93] pod "kube-proxy-5k8xr" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:36.178762   31894 pod_ready.go:82] duration metric: took 399.754717ms for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.178772   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.375796   31894 request.go:632] Waited for 196.967983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:27:36.375874   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:27:36.375886   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.375896   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.375904   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.379499   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.575244   31894 request.go:632] Waited for 194.690586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.575296   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.575302   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.575309   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.575313   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.578693   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.579177   31894 pod_ready.go:93] pod "kube-proxy-d5kbx" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:36.579194   31894 pod_ready.go:82] duration metric: took 400.417285ms for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.579204   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.775441   31894 request.go:632] Waited for 196.152904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:27:36.775501   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:27:36.775506   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.775513   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.775520   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.779261   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.975299   31894 request.go:632] Waited for 195.363204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.975353   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.975359   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.975366   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.975371   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.978496   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.979108   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:36.979128   31894 pod_ready.go:82] duration metric: took 399.917184ms for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.979139   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:37.175159   31894 request.go:632] Waited for 195.953066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:27:37.175232   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:27:37.175237   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.175244   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.175248   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.177949   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:37.375816   31894 request.go:632] Waited for 197.404743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:37.375886   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:37.375891   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.375899   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.375904   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.378860   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:37.379533   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:37.379552   31894 pod_ready.go:82] duration metric: took 400.406126ms for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:37.379565   31894 pod_ready.go:39] duration metric: took 3.200534207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:27:37.379587   31894 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:27:37.379643   31894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:27:37.393015   31894 api_server.go:72] duration metric: took 21.530341114s to wait for apiserver process to appear ...
	I0829 18:27:37.393037   31894 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:27:37.393061   31894 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I0829 18:27:37.399471   31894 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I0829 18:27:37.399528   31894 round_trippers.go:463] GET https://192.168.39.39:8443/version
	I0829 18:27:37.399535   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.399543   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.399548   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.400367   31894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 18:27:37.400474   31894 api_server.go:141] control plane version: v1.31.0
	I0829 18:27:37.400492   31894 api_server.go:131] duration metric: took 7.448915ms to wait for apiserver health ...
	I0829 18:27:37.400499   31894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:27:37.574794   31894 request.go:632] Waited for 174.234454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.574875   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.574884   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.574893   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.574897   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.580231   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:27:37.584653   31894 system_pods.go:59] 17 kube-system pods found
	I0829 18:27:37.584686   31894 system_pods.go:61] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:27:37.584691   31894 system_pods.go:61] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:27:37.584696   31894 system_pods.go:61] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:27:37.584699   31894 system_pods.go:61] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:27:37.584702   31894 system_pods.go:61] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:27:37.584705   31894 system_pods.go:61] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:27:37.584708   31894 system_pods.go:61] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:27:37.584711   31894 system_pods.go:61] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:27:37.584715   31894 system_pods.go:61] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:27:37.584721   31894 system_pods.go:61] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:27:37.584724   31894 system_pods.go:61] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:27:37.584727   31894 system_pods.go:61] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:27:37.584730   31894 system_pods.go:61] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:27:37.584735   31894 system_pods.go:61] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:27:37.584738   31894 system_pods.go:61] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:27:37.584741   31894 system_pods.go:61] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:27:37.584744   31894 system_pods.go:61] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:27:37.584751   31894 system_pods.go:74] duration metric: took 184.247241ms to wait for pod list to return data ...
	I0829 18:27:37.584758   31894 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:27:37.775208   31894 request.go:632] Waited for 190.357456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:27:37.775264   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:27:37.775269   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.775276   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.775281   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.779856   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:37.780083   31894 default_sa.go:45] found service account: "default"
	I0829 18:27:37.780099   31894 default_sa.go:55] duration metric: took 195.333777ms for default service account to be created ...
	I0829 18:27:37.780106   31894 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:27:37.975539   31894 request.go:632] Waited for 195.372955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.975592   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.975598   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.975605   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.975610   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.980062   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:37.984010   31894 system_pods.go:86] 17 kube-system pods found
	I0829 18:27:37.984039   31894 system_pods.go:89] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:27:37.984044   31894 system_pods.go:89] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:27:37.984048   31894 system_pods.go:89] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:27:37.984052   31894 system_pods.go:89] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:27:37.984055   31894 system_pods.go:89] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:27:37.984058   31894 system_pods.go:89] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:27:37.984062   31894 system_pods.go:89] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:27:37.984065   31894 system_pods.go:89] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:27:37.984069   31894 system_pods.go:89] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:27:37.984074   31894 system_pods.go:89] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:27:37.984077   31894 system_pods.go:89] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:27:37.984080   31894 system_pods.go:89] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:27:37.984083   31894 system_pods.go:89] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:27:37.984087   31894 system_pods.go:89] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:27:37.984092   31894 system_pods.go:89] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:27:37.984097   31894 system_pods.go:89] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:27:37.984100   31894 system_pods.go:89] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:27:37.984109   31894 system_pods.go:126] duration metric: took 203.998182ms to wait for k8s-apps to be running ...
	I0829 18:27:37.984118   31894 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:27:37.984158   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:27:37.998975   31894 system_svc.go:56] duration metric: took 14.842358ms WaitForService to wait for kubelet
	I0829 18:27:37.999011   31894 kubeadm.go:582] duration metric: took 22.136338987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:27:37.999034   31894 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:27:38.175474   31894 request.go:632] Waited for 176.363823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes
	I0829 18:27:38.175542   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes
	I0829 18:27:38.175547   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:38.175555   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:38.175558   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:38.178967   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:38.179631   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:27:38.179654   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:27:38.179663   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:27:38.179667   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:27:38.179671   31894 node_conditions.go:105] duration metric: took 180.632421ms to run NodePressure ...
	I0829 18:27:38.179681   31894 start.go:241] waiting for startup goroutines ...
	I0829 18:27:38.179704   31894 start.go:255] writing updated cluster config ...
	I0829 18:27:38.181914   31894 out.go:201] 
	I0829 18:27:38.183620   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:27:38.183712   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:27:38.185446   31894 out.go:177] * Starting "ha-782425-m03" control-plane node in "ha-782425" cluster
	I0829 18:27:38.186630   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:27:38.186655   31894 cache.go:56] Caching tarball of preloaded images
	I0829 18:27:38.186768   31894 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:27:38.186782   31894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:27:38.186867   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:27:38.187024   31894 start.go:360] acquireMachinesLock for ha-782425-m03: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:27:38.187066   31894 start.go:364] duration metric: took 24.034µs to acquireMachinesLock for "ha-782425-m03"
	I0829 18:27:38.187088   31894 start.go:93] Provisioning new machine with config: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:27:38.187190   31894 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0829 18:27:38.188663   31894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 18:27:38.188741   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:27:38.188775   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:27:38.203687   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0829 18:27:38.204082   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:27:38.204533   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:27:38.204555   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:27:38.204845   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:27:38.205056   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:27:38.205175   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:27:38.205366   31894 start.go:159] libmachine.API.Create for "ha-782425" (driver="kvm2")
	I0829 18:27:38.205393   31894 client.go:168] LocalClient.Create starting
	I0829 18:27:38.205421   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:27:38.205454   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:27:38.205469   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:27:38.205514   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:27:38.205532   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:27:38.205542   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:27:38.205563   31894 main.go:141] libmachine: Running pre-create checks...
	I0829 18:27:38.205570   31894 main.go:141] libmachine: (ha-782425-m03) Calling .PreCreateCheck
	I0829 18:27:38.205701   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetConfigRaw
	I0829 18:27:38.206078   31894 main.go:141] libmachine: Creating machine...
	I0829 18:27:38.206106   31894 main.go:141] libmachine: (ha-782425-m03) Calling .Create
	I0829 18:27:38.206218   31894 main.go:141] libmachine: (ha-782425-m03) Creating KVM machine...
	I0829 18:27:38.207453   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found existing default KVM network
	I0829 18:27:38.207610   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found existing private KVM network mk-ha-782425
	I0829 18:27:38.207753   31894 main.go:141] libmachine: (ha-782425-m03) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03 ...
	I0829 18:27:38.207778   31894 main.go:141] libmachine: (ha-782425-m03) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:27:38.207833   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:38.207745   32645 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:27:38.207995   31894 main.go:141] libmachine: (ha-782425-m03) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:27:38.434867   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:38.434751   32645 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa...
	I0829 18:27:39.031080   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:39.030952   32645 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/ha-782425-m03.rawdisk...
	I0829 18:27:39.031119   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Writing magic tar header
	I0829 18:27:39.031134   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Writing SSH key tar header
	I0829 18:27:39.031147   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:39.031066   32645 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03 ...
	I0829 18:27:39.031165   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03
	I0829 18:27:39.031209   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03 (perms=drwx------)
	I0829 18:27:39.031234   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:27:39.031245   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:27:39.031258   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:27:39.031271   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:27:39.031287   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:27:39.031298   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:27:39.031310   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:27:39.031318   31894 main.go:141] libmachine: (ha-782425-m03) Creating domain...
	I0829 18:27:39.031329   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:27:39.031356   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:27:39.031367   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:27:39.031377   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home
	I0829 18:27:39.031385   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Skipping /home - not owner
	I0829 18:27:39.032300   31894 main.go:141] libmachine: (ha-782425-m03) define libvirt domain using xml: 
	I0829 18:27:39.032322   31894 main.go:141] libmachine: (ha-782425-m03) <domain type='kvm'>
	I0829 18:27:39.032329   31894 main.go:141] libmachine: (ha-782425-m03)   <name>ha-782425-m03</name>
	I0829 18:27:39.032344   31894 main.go:141] libmachine: (ha-782425-m03)   <memory unit='MiB'>2200</memory>
	I0829 18:27:39.032352   31894 main.go:141] libmachine: (ha-782425-m03)   <vcpu>2</vcpu>
	I0829 18:27:39.032359   31894 main.go:141] libmachine: (ha-782425-m03)   <features>
	I0829 18:27:39.032368   31894 main.go:141] libmachine: (ha-782425-m03)     <acpi/>
	I0829 18:27:39.032379   31894 main.go:141] libmachine: (ha-782425-m03)     <apic/>
	I0829 18:27:39.032387   31894 main.go:141] libmachine: (ha-782425-m03)     <pae/>
	I0829 18:27:39.032392   31894 main.go:141] libmachine: (ha-782425-m03)     
	I0829 18:27:39.032396   31894 main.go:141] libmachine: (ha-782425-m03)   </features>
	I0829 18:27:39.032401   31894 main.go:141] libmachine: (ha-782425-m03)   <cpu mode='host-passthrough'>
	I0829 18:27:39.032451   31894 main.go:141] libmachine: (ha-782425-m03)   
	I0829 18:27:39.032470   31894 main.go:141] libmachine: (ha-782425-m03)   </cpu>
	I0829 18:27:39.032481   31894 main.go:141] libmachine: (ha-782425-m03)   <os>
	I0829 18:27:39.032491   31894 main.go:141] libmachine: (ha-782425-m03)     <type>hvm</type>
	I0829 18:27:39.032502   31894 main.go:141] libmachine: (ha-782425-m03)     <boot dev='cdrom'/>
	I0829 18:27:39.032520   31894 main.go:141] libmachine: (ha-782425-m03)     <boot dev='hd'/>
	I0829 18:27:39.032533   31894 main.go:141] libmachine: (ha-782425-m03)     <bootmenu enable='no'/>
	I0829 18:27:39.032553   31894 main.go:141] libmachine: (ha-782425-m03)   </os>
	I0829 18:27:39.032567   31894 main.go:141] libmachine: (ha-782425-m03)   <devices>
	I0829 18:27:39.032577   31894 main.go:141] libmachine: (ha-782425-m03)     <disk type='file' device='cdrom'>
	I0829 18:27:39.032657   31894 main.go:141] libmachine: (ha-782425-m03)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/boot2docker.iso'/>
	I0829 18:27:39.032673   31894 main.go:141] libmachine: (ha-782425-m03)       <target dev='hdc' bus='scsi'/>
	I0829 18:27:39.032679   31894 main.go:141] libmachine: (ha-782425-m03)       <readonly/>
	I0829 18:27:39.032686   31894 main.go:141] libmachine: (ha-782425-m03)     </disk>
	I0829 18:27:39.032692   31894 main.go:141] libmachine: (ha-782425-m03)     <disk type='file' device='disk'>
	I0829 18:27:39.032702   31894 main.go:141] libmachine: (ha-782425-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:27:39.032712   31894 main.go:141] libmachine: (ha-782425-m03)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/ha-782425-m03.rawdisk'/>
	I0829 18:27:39.032720   31894 main.go:141] libmachine: (ha-782425-m03)       <target dev='hda' bus='virtio'/>
	I0829 18:27:39.032727   31894 main.go:141] libmachine: (ha-782425-m03)     </disk>
	I0829 18:27:39.032735   31894 main.go:141] libmachine: (ha-782425-m03)     <interface type='network'>
	I0829 18:27:39.032748   31894 main.go:141] libmachine: (ha-782425-m03)       <source network='mk-ha-782425'/>
	I0829 18:27:39.032759   31894 main.go:141] libmachine: (ha-782425-m03)       <model type='virtio'/>
	I0829 18:27:39.032770   31894 main.go:141] libmachine: (ha-782425-m03)     </interface>
	I0829 18:27:39.032780   31894 main.go:141] libmachine: (ha-782425-m03)     <interface type='network'>
	I0829 18:27:39.032792   31894 main.go:141] libmachine: (ha-782425-m03)       <source network='default'/>
	I0829 18:27:39.032803   31894 main.go:141] libmachine: (ha-782425-m03)       <model type='virtio'/>
	I0829 18:27:39.032813   31894 main.go:141] libmachine: (ha-782425-m03)     </interface>
	I0829 18:27:39.032846   31894 main.go:141] libmachine: (ha-782425-m03)     <serial type='pty'>
	I0829 18:27:39.032872   31894 main.go:141] libmachine: (ha-782425-m03)       <target port='0'/>
	I0829 18:27:39.032885   31894 main.go:141] libmachine: (ha-782425-m03)     </serial>
	I0829 18:27:39.032907   31894 main.go:141] libmachine: (ha-782425-m03)     <console type='pty'>
	I0829 18:27:39.032918   31894 main.go:141] libmachine: (ha-782425-m03)       <target type='serial' port='0'/>
	I0829 18:27:39.032931   31894 main.go:141] libmachine: (ha-782425-m03)     </console>
	I0829 18:27:39.032941   31894 main.go:141] libmachine: (ha-782425-m03)     <rng model='virtio'>
	I0829 18:27:39.032948   31894 main.go:141] libmachine: (ha-782425-m03)       <backend model='random'>/dev/random</backend>
	I0829 18:27:39.032960   31894 main.go:141] libmachine: (ha-782425-m03)     </rng>
	I0829 18:27:39.032970   31894 main.go:141] libmachine: (ha-782425-m03)     
	I0829 18:27:39.032981   31894 main.go:141] libmachine: (ha-782425-m03)     
	I0829 18:27:39.032996   31894 main.go:141] libmachine: (ha-782425-m03)   </devices>
	I0829 18:27:39.033007   31894 main.go:141] libmachine: (ha-782425-m03) </domain>
	I0829 18:27:39.033016   31894 main.go:141] libmachine: (ha-782425-m03) 
	I0829 18:27:39.039862   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:87:fd:da in network default
	I0829 18:27:39.040474   31894 main.go:141] libmachine: (ha-782425-m03) Ensuring networks are active...
	I0829 18:27:39.040503   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:39.041141   31894 main.go:141] libmachine: (ha-782425-m03) Ensuring network default is active
	I0829 18:27:39.041412   31894 main.go:141] libmachine: (ha-782425-m03) Ensuring network mk-ha-782425 is active
	I0829 18:27:39.041760   31894 main.go:141] libmachine: (ha-782425-m03) Getting domain xml...
	I0829 18:27:39.042459   31894 main.go:141] libmachine: (ha-782425-m03) Creating domain...
	I0829 18:27:40.284792   31894 main.go:141] libmachine: (ha-782425-m03) Waiting to get IP...
	I0829 18:27:40.285537   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:40.286073   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:40.286113   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:40.286065   32645 retry.go:31] will retry after 295.874325ms: waiting for machine to come up
	I0829 18:27:40.583804   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:40.584416   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:40.584452   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:40.584354   32645 retry.go:31] will retry after 349.576346ms: waiting for machine to come up
	I0829 18:27:40.935822   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:40.936255   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:40.936280   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:40.936208   32645 retry.go:31] will retry after 474.929638ms: waiting for machine to come up
	I0829 18:27:41.412903   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:41.413367   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:41.413394   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:41.413338   32645 retry.go:31] will retry after 540.983998ms: waiting for machine to come up
	I0829 18:27:41.956126   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:41.956649   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:41.956685   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:41.956599   32645 retry.go:31] will retry after 711.407523ms: waiting for machine to come up
	I0829 18:27:42.669344   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:42.669731   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:42.669759   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:42.669680   32645 retry.go:31] will retry after 803.960124ms: waiting for machine to come up
	I0829 18:27:43.475342   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:43.475775   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:43.475804   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:43.475738   32645 retry.go:31] will retry after 949.957391ms: waiting for machine to come up
	I0829 18:27:44.426840   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:44.427252   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:44.427276   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:44.427199   32645 retry.go:31] will retry after 1.186719918s: waiting for machine to come up
	I0829 18:27:45.615314   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:45.615690   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:45.615720   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:45.615636   32645 retry.go:31] will retry after 1.7690001s: waiting for machine to come up
	I0829 18:27:47.385868   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:47.386335   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:47.386364   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:47.386294   32645 retry.go:31] will retry after 1.504430849s: waiting for machine to come up
	I0829 18:27:48.891994   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:48.892463   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:48.892495   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:48.892411   32645 retry.go:31] will retry after 2.537725233s: waiting for machine to come up
	I0829 18:27:51.433157   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:51.433635   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:51.433658   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:51.433589   32645 retry.go:31] will retry after 2.650154903s: waiting for machine to come up
	I0829 18:27:54.085317   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:54.085702   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:54.085728   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:54.085655   32645 retry.go:31] will retry after 4.258795447s: waiting for machine to come up
	I0829 18:27:58.345916   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:58.346295   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has current primary IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:58.346321   31894 main.go:141] libmachine: (ha-782425-m03) Found IP for machine: 192.168.39.220
	I0829 18:27:58.346337   31894 main.go:141] libmachine: (ha-782425-m03) Reserving static IP address...
	I0829 18:27:58.346666   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find host DHCP lease matching {name: "ha-782425-m03", mac: "52:54:00:b5:78:f3", ip: "192.168.39.220"} in network mk-ha-782425
	I0829 18:27:58.418121   31894 main.go:141] libmachine: (ha-782425-m03) Reserved static IP address: 192.168.39.220
	I0829 18:27:58.418146   31894 main.go:141] libmachine: (ha-782425-m03) Waiting for SSH to be available...
	I0829 18:27:58.418187   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Getting to WaitForSSH function...
	I0829 18:27:58.420469   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:58.420795   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425
	I0829 18:27:58.420829   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find defined IP address of network mk-ha-782425 interface with MAC address 52:54:00:b5:78:f3
	I0829 18:27:58.420969   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH client type: external
	I0829 18:27:58.420993   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa (-rw-------)
	I0829 18:27:58.421022   31894 main.go:141] libmachine: (ha-782425-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:27:58.421036   31894 main.go:141] libmachine: (ha-782425-m03) DBG | About to run SSH command:
	I0829 18:27:58.421049   31894 main.go:141] libmachine: (ha-782425-m03) DBG | exit 0
	I0829 18:27:58.424711   31894 main.go:141] libmachine: (ha-782425-m03) DBG | SSH cmd err, output: exit status 255: 
	I0829 18:27:58.424735   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 18:27:58.424745   31894 main.go:141] libmachine: (ha-782425-m03) DBG | command : exit 0
	I0829 18:27:58.424756   31894 main.go:141] libmachine: (ha-782425-m03) DBG | err     : exit status 255
	I0829 18:27:58.424765   31894 main.go:141] libmachine: (ha-782425-m03) DBG | output  : 
	I0829 18:28:01.426845   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Getting to WaitForSSH function...
	I0829 18:28:01.429210   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.429521   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.429560   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.429686   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH client type: external
	I0829 18:28:01.429710   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa (-rw-------)
	I0829 18:28:01.429747   31894 main.go:141] libmachine: (ha-782425-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:28:01.429765   31894 main.go:141] libmachine: (ha-782425-m03) DBG | About to run SSH command:
	I0829 18:28:01.429778   31894 main.go:141] libmachine: (ha-782425-m03) DBG | exit 0
	I0829 18:28:01.553920   31894 main.go:141] libmachine: (ha-782425-m03) DBG | SSH cmd err, output: <nil>: 
	I0829 18:28:01.554185   31894 main.go:141] libmachine: (ha-782425-m03) KVM machine creation complete!
	I0829 18:28:01.554539   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetConfigRaw
	I0829 18:28:01.555039   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:01.555233   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:01.555399   31894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:28:01.555414   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:28:01.556736   31894 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:28:01.556749   31894 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:28:01.556754   31894 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:28:01.556760   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.558787   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.559126   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.559151   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.559276   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.559425   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.559571   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.559705   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.559846   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.560088   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.560103   31894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:28:01.657214   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:28:01.657236   31894 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:28:01.657246   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.660034   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.660406   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.660434   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.660580   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.660751   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.660914   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.661076   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.661222   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.661384   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.661394   31894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:28:01.758625   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:28:01.758708   31894 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:28:01.758722   31894 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:28:01.758733   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:28:01.758977   31894 buildroot.go:166] provisioning hostname "ha-782425-m03"
	I0829 18:28:01.758997   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:28:01.759168   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.761812   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.762222   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.762244   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.762404   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.762553   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.762702   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.762832   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.762990   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.763141   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.763152   31894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425-m03 && echo "ha-782425-m03" | sudo tee /etc/hostname
	I0829 18:28:01.871627   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425-m03
	
	I0829 18:28:01.871658   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.874406   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.874839   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.874872   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.875012   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.875212   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.875367   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.875528   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.875723   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.875921   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.875943   31894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:28:01.978192   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:28:01.978221   31894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:28:01.978240   31894 buildroot.go:174] setting up certificates
	I0829 18:28:01.978252   31894 provision.go:84] configureAuth start
	I0829 18:28:01.978263   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:28:01.978529   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:01.981151   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.981561   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.981582   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.981777   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.983874   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.984210   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.984236   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.984374   31894 provision.go:143] copyHostCerts
	I0829 18:28:01.984406   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:28:01.984452   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:28:01.984463   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:28:01.984532   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:28:01.984635   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:28:01.984660   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:28:01.984670   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:28:01.984708   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:28:01.984770   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:28:01.984797   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:28:01.984805   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:28:01.984836   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:28:01.984919   31894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425-m03 san=[127.0.0.1 192.168.39.220 ha-782425-m03 localhost minikube]
	I0829 18:28:02.246243   31894 provision.go:177] copyRemoteCerts
	I0829 18:28:02.246297   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:28:02.246376   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.248992   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.249348   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.249377   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.249505   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.249710   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.249845   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.249993   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:02.327997   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:28:02.328103   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:28:02.353504   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:28:02.353575   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:28:02.377505   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:28:02.377584   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:28:02.400633   31894 provision.go:87] duration metric: took 422.367175ms to configureAuth
	I0829 18:28:02.400665   31894 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:28:02.400854   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:02.400922   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.403375   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.403770   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.403799   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.403901   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.404140   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.404305   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.404443   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.404613   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:02.404822   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:02.404843   31894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:28:02.622069   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:28:02.622110   31894 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:28:02.622121   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetURL
	I0829 18:28:02.623387   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using libvirt version 6000000
	I0829 18:28:02.625466   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.625803   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.625823   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.626006   31894 main.go:141] libmachine: Docker is up and running!
	I0829 18:28:02.626025   31894 main.go:141] libmachine: Reticulating splines...
	I0829 18:28:02.626032   31894 client.go:171] duration metric: took 24.420632742s to LocalClient.Create
	I0829 18:28:02.626053   31894 start.go:167] duration metric: took 24.420688809s to libmachine.API.Create "ha-782425"
	I0829 18:28:02.626062   31894 start.go:293] postStartSetup for "ha-782425-m03" (driver="kvm2")
	I0829 18:28:02.626070   31894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:28:02.626104   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.626333   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:28:02.626366   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.628445   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.628766   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.628791   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.628922   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.629087   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.629219   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.629331   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:02.708657   31894 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:28:02.712593   31894 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:28:02.712615   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:28:02.712673   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:28:02.712741   31894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:28:02.712750   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:28:02.712826   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:28:02.722183   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:28:02.746168   31894 start.go:296] duration metric: took 120.091913ms for postStartSetup
	I0829 18:28:02.746237   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetConfigRaw
	I0829 18:28:02.746836   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:02.749600   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.750012   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.750042   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.750378   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:28:02.750629   31894 start.go:128] duration metric: took 24.563428836s to createHost
	I0829 18:28:02.750658   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.753152   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.753505   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.753533   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.753721   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.753906   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.754061   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.754209   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.754364   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:02.754538   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:02.754550   31894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:28:02.850607   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724956082.826343446
	
	I0829 18:28:02.850631   31894 fix.go:216] guest clock: 1724956082.826343446
	I0829 18:28:02.850641   31894 fix.go:229] Guest: 2024-08-29 18:28:02.826343446 +0000 UTC Remote: 2024-08-29 18:28:02.750643528 +0000 UTC m=+144.918639060 (delta=75.699918ms)
	I0829 18:28:02.850670   31894 fix.go:200] guest clock delta is within tolerance: 75.699918ms
	I0829 18:28:02.850681   31894 start.go:83] releasing machines lock for "ha-782425-m03", held for 24.663603239s
	I0829 18:28:02.850710   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.851009   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:02.854120   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.854546   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.854573   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.856517   31894 out.go:177] * Found network options:
	I0829 18:28:02.857741   31894 out.go:177]   - NO_PROXY=192.168.39.39,192.168.39.253
	W0829 18:28:02.859050   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 18:28:02.859077   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:28:02.859094   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.859605   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.859791   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.859876   31894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:28:02.859917   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	W0829 18:28:02.859982   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 18:28:02.860005   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:28:02.860062   31894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:28:02.860082   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.862414   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.862781   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.862807   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.862827   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.862998   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.863155   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.863292   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.863330   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.863394   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.863455   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.863511   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:02.863606   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.863722   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.863855   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:03.087651   31894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:28:03.094619   31894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:28:03.094686   31894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:28:03.109806   31894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:28:03.109831   31894 start.go:495] detecting cgroup driver to use...
	I0829 18:28:03.109913   31894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:28:03.126690   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:28:03.142265   31894 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:28:03.142319   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:28:03.156210   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:28:03.169742   31894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:28:03.278641   31894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:28:03.431999   31894 docker.go:233] disabling docker service ...
	I0829 18:28:03.432062   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:28:03.445416   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:28:03.457051   31894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:28:03.577740   31894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:28:03.692002   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:28:03.706207   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:28:03.723020   31894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:28:03.723077   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.734591   31894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:28:03.734655   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.744783   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.754403   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.766763   31894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:28:03.778511   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.788947   31894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.805748   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.815930   31894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:28:03.824744   31894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:28:03.824798   31894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:28:03.837350   31894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:28:03.845996   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:28:03.957638   31894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:28:04.044780   31894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:28:04.044862   31894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:28:04.049116   31894 start.go:563] Will wait 60s for crictl version
	I0829 18:28:04.049174   31894 ssh_runner.go:195] Run: which crictl
	I0829 18:28:04.052467   31894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:28:04.091186   31894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:28:04.091265   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:28:04.122455   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:28:04.152483   31894 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:28:04.153744   31894 out.go:177]   - env NO_PROXY=192.168.39.39
	I0829 18:28:04.154982   31894 out.go:177]   - env NO_PROXY=192.168.39.39,192.168.39.253
	I0829 18:28:04.156108   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:04.159054   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:04.159540   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:04.159576   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:04.159747   31894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:28:04.163668   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:28:04.175622   31894 mustload.go:65] Loading cluster: ha-782425
	I0829 18:28:04.175901   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:04.176177   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:28:04.176211   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:28:04.191663   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0829 18:28:04.192143   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:28:04.192585   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:28:04.192621   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:28:04.193002   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:28:04.193191   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:28:04.194781   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:28:04.195118   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:28:04.195161   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:28:04.209854   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0829 18:28:04.210268   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:28:04.210790   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:28:04.210810   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:28:04.211200   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:28:04.211506   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:28:04.211690   31894 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.220
	I0829 18:28:04.211704   31894 certs.go:194] generating shared ca certs ...
	I0829 18:28:04.211717   31894 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:28:04.211836   31894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:28:04.211871   31894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:28:04.211880   31894 certs.go:256] generating profile certs ...
	I0829 18:28:04.211952   31894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:28:04.211975   31894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847
	I0829 18:28:04.211989   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.253 192.168.39.220 192.168.39.254]
	I0829 18:28:04.348270   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847 ...
	I0829 18:28:04.348307   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847: {Name:mk14139edb6a62e8e4d43837fb216554daa427a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:28:04.348503   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847 ...
	I0829 18:28:04.348520   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847: {Name:mk7bfcdc5e7a3699a316207b281b7344bc61aee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:28:04.348624   31894 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:28:04.348793   31894 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
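certs.go above issues the shared apiserver serving certificate with an all-IP SAN list (10.96.0.1, 127.0.0.1, 10.0.0.1, the three node IPs, and the 192.168.39.254 VIP). The sketch below shows how such a certificate can be produced with Go's crypto/x509; it is not minikube's code, and the throwaway CA, subject, key size, and 24h validity are illustrative assumptions (error handling is elided for brevity).

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA standing in for the profile's minikubeCA key pair.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Leaf certificate carrying the IP SANs seen in the apiserver cert log line above.
    leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    leafTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube-apiserver"}, // illustrative subject
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.39"), net.ParseIP("192.168.39.253"),
            net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.254"),
        },
    }
    leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}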
	I0829 18:28:04.348965   31894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:28:04.348983   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:28:04.349001   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:28:04.349020   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:28:04.349060   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:28:04.349077   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:28:04.349091   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:28:04.349107   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:28:04.349124   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:28:04.349198   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:28:04.349241   31894 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:28:04.349254   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:28:04.349288   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:28:04.349320   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:28:04.349352   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:28:04.349406   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:28:04.349446   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.349466   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.349484   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.349524   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:28:04.352479   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:04.352867   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:28:04.352896   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:04.353149   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:28:04.353352   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:28:04.353490   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:28:04.353617   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:28:04.430392   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 18:28:04.435394   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 18:28:04.446366   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 18:28:04.450926   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0829 18:28:04.460806   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 18:28:04.464472   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 18:28:04.483922   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 18:28:04.488739   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0829 18:28:04.500498   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 18:28:04.504343   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 18:28:04.513600   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 18:28:04.517538   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 18:28:04.527506   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:28:04.551092   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:28:04.572762   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:28:04.597717   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:28:04.619742   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0829 18:28:04.640777   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:28:04.662127   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:28:04.683766   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:28:04.706856   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:28:04.731433   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:28:04.753760   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:28:04.776862   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 18:28:04.792923   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0829 18:28:04.808730   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 18:28:04.824668   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0829 18:28:04.840759   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 18:28:04.856003   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 18:28:04.870873   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 18:28:04.886451   31894 ssh_runner.go:195] Run: openssl version
	I0829 18:28:04.891673   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:28:04.902385   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.907147   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.907207   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.912620   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:28:04.922151   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:28:04.932091   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.936231   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.936287   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.941423   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:28:04.951794   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:28:04.961377   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.965411   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.965469   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.970625   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:28:04.980209   31894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:28:04.983763   31894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:28:04.983808   31894 kubeadm.go:934] updating node {m03 192.168.39.220 8443 v1.31.0 crio true true} ...
	I0829 18:28:04.983895   31894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
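kubeadm.go above renders the kubelet systemd drop-in for the new node from its name, IP, and binary path. A minimal text/template sketch of that rendering (not minikube's actual template; the struct and field names are hypothetical):

package main

import (
    "os"
    "text/template"
)

type nodeConfig struct {
    Name, IP, KubeletPath string
}

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
    t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    _ = t.Execute(os.Stdout, nodeConfig{
        Name:        "ha-782425-m03",
        IP:          "192.168.39.220",
        KubeletPath: "/var/lib/minikube/binaries/v1.31.0/kubelet",
    })
}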
	I0829 18:28:04.983923   31894 kube-vip.go:115] generating kube-vip config ...
	I0829 18:28:04.983958   31894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:28:05.000225   31894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:28:05.000296   31894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0829 18:28:05.000356   31894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:28:05.009427   31894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 18:28:05.009485   31894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 18:28:05.018082   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 18:28:05.018112   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0829 18:28:05.018082   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0829 18:28:05.018126   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:28:05.018155   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:05.018159   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:28:05.018217   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:28:05.018252   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:28:05.035607   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 18:28:05.035616   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:28:05.035664   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 18:28:05.035689   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 18:28:05.035731   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:28:05.035651   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 18:28:05.066515   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 18:28:05.066546   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
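The binary.go lines above show the pattern used when the node has no Kubernetes binaries yet: each release binary is fetched from dl.k8s.io, verified against its published .sha256 file, and then scp'd into /var/lib/minikube/binaries. A minimal Go sketch of that verify-before-use flow (not minikube's downloader; error handling is reduced to panics for brevity):

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "strings"
)

// fetch returns the body of url or an error on non-200 responses.
func fetch(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    }
    return io.ReadAll(resp.Body)
}

func main() {
    const binURL = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"

    bin, err := fetch(binURL)
    if err != nil {
        panic(err)
    }
    sum, err := fetch(binURL + ".sha256")
    if err != nil {
        panic(err)
    }

    got := sha256.Sum256(bin)
    want := strings.Fields(strings.TrimSpace(string(sum)))[0] // file may be "<hex>" or "<hex>  kubelet"
    if hex.EncodeToString(got[:]) != want {
        panic("checksum mismatch for " + binURL)
    }
    fmt.Printf("verified %d bytes from %s\n", len(bin), binURL)
}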
	I0829 18:28:05.866110   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 18:28:05.874648   31894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:28:05.890771   31894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:28:05.905587   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 18:28:05.920867   31894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:28:05.924680   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:28:05.935968   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:28:06.040663   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:28:06.056566   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:28:06.056969   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:28:06.057017   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:28:06.072764   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0829 18:28:06.073174   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:28:06.073647   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:28:06.073669   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:28:06.073958   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:28:06.074168   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:28:06.074330   31894 start.go:317] joinCluster: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:28:06.074492   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 18:28:06.074512   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:28:06.077448   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:06.077890   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:28:06.077917   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:06.078021   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:28:06.078193   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:28:06.078373   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:28:06.078524   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:28:06.229649   31894 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:28:06.229711   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9l1oah.336h28y6daulw1a3 --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0829 18:28:29.021848   31894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9l1oah.336h28y6daulw1a3 --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (22.792103025s)
	I0829 18:28:29.021899   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 18:28:29.689851   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-782425-m03 minikube.k8s.io/updated_at=2024_08_29T18_28_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=ha-782425 minikube.k8s.io/primary=false
	I0829 18:28:29.817880   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-782425-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 18:28:29.937397   31894 start.go:319] duration metric: took 23.863062158s to joinCluster
	I0829 18:28:29.937564   31894 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:28:29.937913   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:29.938932   31894 out.go:177] * Verifying Kubernetes components...
	I0829 18:28:29.940500   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:28:30.196095   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:28:30.220231   31894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:28:30.220593   31894 kapi.go:59] client config for ha-782425: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt", KeyFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key", CAFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 18:28:30.220689   31894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.39:8443
	I0829 18:28:30.221049   31894 node_ready.go:35] waiting up to 6m0s for node "ha-782425-m03" to be "Ready" ...
	I0829 18:28:30.221187   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:30.221200   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:30.221211   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:30.221218   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:30.224755   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:30.721330   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:30.721355   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:30.721367   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:30.721373   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:30.728584   31894 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 18:28:31.221367   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:31.221389   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:31.221401   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:31.221405   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:31.224755   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:31.721799   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:31.721824   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:31.721831   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:31.721835   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:31.725200   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:32.222133   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:32.222153   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:32.222161   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:32.222165   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:32.225507   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:32.225929   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:32.721309   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:32.721334   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:32.721345   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:32.721351   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:32.725144   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:33.221227   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:33.221250   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:33.221262   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:33.221266   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:33.229418   31894 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 18:28:33.721432   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:33.721451   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:33.721457   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:33.721461   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:33.724816   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:34.221757   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:34.221781   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:34.221788   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:34.221792   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:34.224883   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:34.721339   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:34.721362   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:34.721373   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:34.721379   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:34.725423   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:34.726183   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:35.221535   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:35.221557   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:35.221567   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:35.221578   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:35.224396   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:35.721928   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:35.721952   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:35.721961   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:35.721965   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:35.725715   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:36.222108   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:36.222135   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:36.222144   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:36.222151   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:36.225212   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:36.722020   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:36.722041   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:36.722049   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:36.722052   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:36.725279   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:37.222211   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:37.222234   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:37.222242   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:37.222247   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:37.225639   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:37.226238   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:37.721548   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:37.721574   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:37.721587   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:37.721595   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:37.726891   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:38.221245   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:38.221272   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:38.221283   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:38.221288   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:38.224980   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:38.722210   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:38.722232   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:38.722240   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:38.722243   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:38.725861   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:39.221264   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:39.221285   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:39.221297   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:39.221302   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:39.224442   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:39.721756   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:39.721778   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:39.721785   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:39.721789   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:39.725432   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:39.726047   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:40.221412   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:40.221436   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:40.221446   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:40.221453   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:40.224989   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:40.721984   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:40.722006   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:40.722014   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:40.722018   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:40.725151   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:41.221578   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:41.221601   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:41.221609   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:41.221612   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:41.224550   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:41.721614   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:41.721635   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:41.721646   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:41.721651   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:41.724956   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:42.221745   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:42.221772   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:42.221785   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:42.221791   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:42.224724   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:42.225407   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:42.722270   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:42.722294   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:42.722302   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:42.722307   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:42.725463   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:43.221446   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:43.221466   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:43.221474   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:43.221478   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:43.224544   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:43.721514   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:43.721540   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:43.721549   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:43.721553   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:43.724824   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:44.221541   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:44.221563   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:44.221573   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:44.221579   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:44.225820   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:44.226444   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:44.722232   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:44.722256   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:44.722266   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:44.722273   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:44.726293   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:45.221702   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:45.221724   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:45.221734   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:45.221742   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:45.225230   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:45.722155   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:45.722177   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:45.722185   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:45.722189   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:45.725813   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.221239   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:46.221262   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.221270   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.221276   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.225170   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.721677   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:46.721705   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.721715   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.721723   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.730104   31894 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 18:28:46.730837   31894 node_ready.go:49] node "ha-782425-m03" has status "Ready":"True"
	I0829 18:28:46.730866   31894 node_ready.go:38] duration metric: took 16.509796396s for node "ha-782425-m03" to be "Ready" ...
	I0829 18:28:46.730877   31894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:28:46.730975   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:46.730989   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.730999   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.731003   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.743081   31894 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0829 18:28:46.751794   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.751909   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nw2x2
	I0829 18:28:46.751921   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.751931   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.751943   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.757395   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:46.760165   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:46.760186   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.760196   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.760200   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.765275   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:46.765758   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.765776   31894 pod_ready.go:82] duration metric: took 13.947729ms for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.765785   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.765836   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qhxm5
	I0829 18:28:46.765845   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.765852   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.765857   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.769497   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.770133   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:46.770147   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.770154   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.770158   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.773596   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.774413   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.774431   31894 pod_ready.go:82] duration metric: took 8.64041ms for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.774440   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.774491   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425
	I0829 18:28:46.774498   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.774505   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.774511   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.777301   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:46.777927   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:46.777946   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.777958   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.777963   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.780612   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:46.781398   31894 pod_ready.go:93] pod "etcd-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.781418   31894 pod_ready.go:82] duration metric: took 6.971235ms for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.781430   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.781492   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m02
	I0829 18:28:46.781502   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.781512   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.781521   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.784465   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:46.785348   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:46.785368   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.785377   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.785383   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.788415   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.788909   31894 pod_ready.go:93] pod "etcd-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.788930   31894 pod_ready.go:82] duration metric: took 7.491319ms for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.788941   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.922255   31894 request.go:632] Waited for 133.262473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m03
	I0829 18:28:46.922315   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m03
	I0829 18:28:46.922320   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.922327   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.922332   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.925911   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.121880   31894 request.go:632] Waited for 195.274268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:47.121948   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:47.121957   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.121964   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.121970   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.126052   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:47.126569   31894 pod_ready.go:93] pod "etcd-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:47.126587   31894 pod_ready.go:82] duration metric: took 337.639932ms for pod "etcd-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.126610   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.322691   31894 request.go:632] Waited for 196.016729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:28:47.322764   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:28:47.322770   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.322777   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.322781   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.326137   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.522154   31894 request.go:632] Waited for 195.372895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:47.522217   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:47.522225   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.522236   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.522244   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.525276   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.525822   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:47.525841   31894 pod_ready.go:82] duration metric: took 399.222875ms for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.525853   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.721931   31894 request.go:632] Waited for 196.002454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:28:47.721989   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:28:47.721996   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.722010   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.722019   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.726474   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:47.921944   31894 request.go:632] Waited for 194.787802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:47.921998   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:47.922004   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.922011   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.922015   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.925279   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.925797   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:47.925815   31894 pod_ready.go:82] duration metric: took 399.954449ms for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.925825   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.122332   31894 request.go:632] Waited for 196.413935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m03
	I0829 18:28:48.122401   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m03
	I0829 18:28:48.122407   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.122417   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.122423   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.125290   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:48.322178   31894 request.go:632] Waited for 196.180445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:48.322242   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:48.322247   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.322253   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.322257   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.325601   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:48.326025   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:48.326045   31894 pod_ready.go:82] duration metric: took 400.213709ms for pod "kube-apiserver-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.326055   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.522038   31894 request.go:632] Waited for 195.915787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:28:48.522130   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:28:48.522137   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.522144   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.522147   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.525256   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:48.722392   31894 request.go:632] Waited for 196.381557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:48.722472   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:48.722477   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.722485   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.722490   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.725847   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:48.726715   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:48.726733   31894 pod_ready.go:82] duration metric: took 400.672433ms for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.726743   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.921779   31894 request.go:632] Waited for 194.971702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:28:48.921853   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:28:48.921859   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.921866   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.921873   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.925541   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.122504   31894 request.go:632] Waited for 196.304236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.122631   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.122653   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.122661   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.122665   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.125446   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:49.126172   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:49.126197   31894 pod_ready.go:82] duration metric: took 399.447536ms for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.126214   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.321919   31894 request.go:632] Waited for 195.623601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m03
	I0829 18:28:49.321973   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m03
	I0829 18:28:49.321978   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.321985   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.321989   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.325493   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.522516   31894 request.go:632] Waited for 196.379616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:49.522587   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:49.522592   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.522600   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.522604   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.525854   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.526428   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:49.526448   31894 pod_ready.go:82] duration metric: took 400.224639ms for pod "kube-controller-manager-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.526458   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.721698   31894 request.go:632] Waited for 195.179732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:28:49.721776   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:28:49.721782   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.721789   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.721793   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.725248   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.922356   31894 request.go:632] Waited for 196.3754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.922406   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.922411   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.922419   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.922422   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.925654   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.926219   31894 pod_ready.go:93] pod "kube-proxy-5k8xr" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:49.926239   31894 pod_ready.go:82] duration metric: took 399.774718ms for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.926249   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.122266   31894 request.go:632] Waited for 195.95942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:28:50.122353   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:28:50.122364   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.122375   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.122385   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.125962   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.322235   31894 request.go:632] Waited for 195.375684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:50.322320   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:50.322327   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.322334   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.322339   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.325864   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.326461   31894 pod_ready.go:93] pod "kube-proxy-d5kbx" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:50.326481   31894 pod_ready.go:82] duration metric: took 400.225563ms for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.326493   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vzss9" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.522571   31894 request.go:632] Waited for 195.985083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vzss9
	I0829 18:28:50.522635   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vzss9
	I0829 18:28:50.522643   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.522654   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.522661   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.525714   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.721758   31894 request.go:632] Waited for 195.287107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:50.721811   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:50.721818   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.721828   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.721834   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.725171   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.725797   31894 pod_ready.go:93] pod "kube-proxy-vzss9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:50.725822   31894 pod_ready.go:82] duration metric: took 399.321762ms for pod "kube-proxy-vzss9" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.725835   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.921909   31894 request.go:632] Waited for 195.989287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:28:50.921974   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:28:50.921981   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.921992   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.922004   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.925258   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.122136   31894 request.go:632] Waited for 196.22971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:51.122197   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:51.122203   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.122221   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.122229   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.125766   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.126286   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:51.126307   31894 pod_ready.go:82] duration metric: took 400.464622ms for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.126324   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.322314   31894 request.go:632] Waited for 195.931418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:28:51.322368   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:28:51.322374   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.322380   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.322384   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.325832   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.522729   31894 request.go:632] Waited for 196.285365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:51.522777   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:51.522783   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.522789   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.522793   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.526109   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.526621   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:51.526642   31894 pod_ready.go:82] duration metric: took 400.311007ms for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.526657   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.722644   31894 request.go:632] Waited for 195.923513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m03
	I0829 18:28:51.722709   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m03
	I0829 18:28:51.722715   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.722722   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.722726   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.726006   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.922120   31894 request.go:632] Waited for 195.361975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:51.922187   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:51.922195   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.922202   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.922206   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.925443   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.925926   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:51.925944   31894 pod_ready.go:82] duration metric: took 399.278435ms for pod "kube-scheduler-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.925954   31894 pod_ready.go:39] duration metric: took 5.195065407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:28:51.925970   31894 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:28:51.926017   31894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:28:51.941439   31894 api_server.go:72] duration metric: took 22.003829538s to wait for apiserver process to appear ...
	I0829 18:28:51.941465   31894 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:28:51.941486   31894 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I0829 18:28:51.945619   31894 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I0829 18:28:51.945703   31894 round_trippers.go:463] GET https://192.168.39.39:8443/version
	I0829 18:28:51.945714   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.945724   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.945732   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.946661   31894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 18:28:51.946718   31894 api_server.go:141] control plane version: v1.31.0
	I0829 18:28:51.946733   31894 api_server.go:131] duration metric: took 5.260491ms to wait for apiserver health ...
	I0829 18:28:51.946741   31894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:28:52.122165   31894 request.go:632] Waited for 175.351343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.122254   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.122262   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.122273   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.122281   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.127609   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:52.135153   31894 system_pods.go:59] 24 kube-system pods found
	I0829 18:28:52.135192   31894 system_pods.go:61] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:28:52.135197   31894 system_pods.go:61] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:28:52.135200   31894 system_pods.go:61] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:28:52.135203   31894 system_pods.go:61] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:28:52.135206   31894 system_pods.go:61] "etcd-ha-782425-m03" [1b112206-4321-4ab1-a4d1-7e62cd911954] Running
	I0829 18:28:52.135208   31894 system_pods.go:61] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:28:52.135211   31894 system_pods.go:61] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:28:52.135214   31894 system_pods.go:61] "kindnet-m5jqn" [4df3ca7e-7d2e-414c-8d1f-77ac7ab484fb] Running
	I0829 18:28:52.135217   31894 system_pods.go:61] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:28:52.135221   31894 system_pods.go:61] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:28:52.135224   31894 system_pods.go:61] "kube-apiserver-ha-782425-m03" [f20451ab-aa25-4414-afba-727618ae119b] Running
	I0829 18:28:52.135233   31894 system_pods.go:61] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:28:52.135240   31894 system_pods.go:61] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:28:52.135243   31894 system_pods.go:61] "kube-controller-manager-ha-782425-m03" [38b82fbd-248d-4b1f-ae8a-284d2fb9cf0b] Running
	I0829 18:28:52.135245   31894 system_pods.go:61] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:28:52.135248   31894 system_pods.go:61] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:28:52.135251   31894 system_pods.go:61] "kube-proxy-vzss9" [de587dda-283e-4c9e-93e6-0e035656bf2b] Running
	I0829 18:28:52.135255   31894 system_pods.go:61] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:28:52.135258   31894 system_pods.go:61] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:28:52.135262   31894 system_pods.go:61] "kube-scheduler-ha-782425-m03" [7f68c7ca-ac7e-49ac-b0c7-e0a27c30349e] Running
	I0829 18:28:52.135265   31894 system_pods.go:61] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:28:52.135270   31894 system_pods.go:61] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:28:52.135272   31894 system_pods.go:61] "kube-vip-ha-782425-m03" [5472756b-a611-427c-9385-028188ba45de] Running
	I0829 18:28:52.135278   31894 system_pods.go:61] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:28:52.135284   31894 system_pods.go:74] duration metric: took 188.537157ms to wait for pod list to return data ...
	I0829 18:28:52.135292   31894 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:28:52.322724   31894 request.go:632] Waited for 187.368856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:28:52.322777   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:28:52.322782   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.322790   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.322795   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.328425   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:52.328552   31894 default_sa.go:45] found service account: "default"
	I0829 18:28:52.328572   31894 default_sa.go:55] duration metric: took 193.269199ms for default service account to be created ...
	I0829 18:28:52.328581   31894 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:28:52.521761   31894 request.go:632] Waited for 193.120158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.521843   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.521850   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.521857   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.521864   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.527155   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:52.536939   31894 system_pods.go:86] 24 kube-system pods found
	I0829 18:28:52.536977   31894 system_pods.go:89] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:28:52.536985   31894 system_pods.go:89] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:28:52.536991   31894 system_pods.go:89] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:28:52.536997   31894 system_pods.go:89] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:28:52.537003   31894 system_pods.go:89] "etcd-ha-782425-m03" [1b112206-4321-4ab1-a4d1-7e62cd911954] Running
	I0829 18:28:52.537009   31894 system_pods.go:89] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:28:52.537014   31894 system_pods.go:89] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:28:52.537019   31894 system_pods.go:89] "kindnet-m5jqn" [4df3ca7e-7d2e-414c-8d1f-77ac7ab484fb] Running
	I0829 18:28:52.537024   31894 system_pods.go:89] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:28:52.537029   31894 system_pods.go:89] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:28:52.537035   31894 system_pods.go:89] "kube-apiserver-ha-782425-m03" [f20451ab-aa25-4414-afba-727618ae119b] Running
	I0829 18:28:52.537041   31894 system_pods.go:89] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:28:52.537082   31894 system_pods.go:89] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:28:52.537094   31894 system_pods.go:89] "kube-controller-manager-ha-782425-m03" [38b82fbd-248d-4b1f-ae8a-284d2fb9cf0b] Running
	I0829 18:28:52.537100   31894 system_pods.go:89] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:28:52.537106   31894 system_pods.go:89] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:28:52.537116   31894 system_pods.go:89] "kube-proxy-vzss9" [de587dda-283e-4c9e-93e6-0e035656bf2b] Running
	I0829 18:28:52.537124   31894 system_pods.go:89] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:28:52.537133   31894 system_pods.go:89] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:28:52.537138   31894 system_pods.go:89] "kube-scheduler-ha-782425-m03" [7f68c7ca-ac7e-49ac-b0c7-e0a27c30349e] Running
	I0829 18:28:52.537147   31894 system_pods.go:89] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:28:52.537155   31894 system_pods.go:89] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:28:52.537160   31894 system_pods.go:89] "kube-vip-ha-782425-m03" [5472756b-a611-427c-9385-028188ba45de] Running
	I0829 18:28:52.537167   31894 system_pods.go:89] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:28:52.537174   31894 system_pods.go:126] duration metric: took 208.587686ms to wait for k8s-apps to be running ...
	I0829 18:28:52.537185   31894 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:28:52.537239   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:52.552882   31894 system_svc.go:56] duration metric: took 15.686393ms WaitForService to wait for kubelet
	I0829 18:28:52.552921   31894 kubeadm.go:582] duration metric: took 22.61531535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:28:52.552953   31894 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:28:52.722301   31894 request.go:632] Waited for 169.275812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes
	I0829 18:28:52.722389   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes
	I0829 18:28:52.722400   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.722410   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.722421   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.726377   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:52.727509   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:28:52.727542   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:28:52.727575   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:28:52.727581   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:28:52.727590   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:28:52.727602   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:28:52.727612   31894 node_conditions.go:105] duration metric: took 174.653119ms to run NodePressure ...
	I0829 18:28:52.727628   31894 start.go:241] waiting for startup goroutines ...
	I0829 18:28:52.727654   31894 start.go:255] writing updated cluster config ...
	I0829 18:28:52.728027   31894 ssh_runner.go:195] Run: rm -f paused
	I0829 18:28:52.780476   31894 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:28:52.782625   31894 out.go:177] * Done! kubectl is now configured to use "ha-782425" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 29 18:32:31 ha-782425 crio[671]: time="2024-08-29 18:32:31.986302267Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956351986252354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46023006-b84d-46de-8542-4ec2bdc3b88f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:31 ha-782425 crio[671]: time="2024-08-29 18:32:31.987939593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f3c46e2-18d6-4365-9781-f05b26edac41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:31 ha-782425 crio[671]: time="2024-08-29 18:32:31.987997764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f3c46e2-18d6-4365-9781-f05b26edac41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:31 ha-782425 crio[671]: time="2024-08-29 18:32:31.988231979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f3c46e2-18d6-4365-9781-f05b26edac41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.025866934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4feb7a22-c5ea-4a70-aea3-415bce3a6b1e name=/runtime.v1.RuntimeService/Version
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.025945307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4feb7a22-c5ea-4a70-aea3-415bce3a6b1e name=/runtime.v1.RuntimeService/Version
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.027104876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39163515-a83f-4629-91c9-4a805fa16515 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.027768143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956352027731820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39163515-a83f-4629-91c9-4a805fa16515 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.028499862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4c2fad3-3631-4869-997d-607cd1316859 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.028572242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4c2fad3-3631-4869-997d-607cd1316859 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.028946614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4c2fad3-3631-4869-997d-607cd1316859 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.064103064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df93f68d-9966-4ba5-9327-9d599e714526 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.064180790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df93f68d-9966-4ba5-9327-9d599e714526 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.065453155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9414bd4-751b-4ba9-8828-9b0e7ab04af0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.065946348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956352065922912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9414bd4-751b-4ba9-8828-9b0e7ab04af0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.066449959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84a0f259-e153-4e87-a22f-ce0fd4c5925f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.066504154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84a0f259-e153-4e87-a22f-ce0fd4c5925f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.066843023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84a0f259-e153-4e87-a22f-ce0fd4c5925f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.107895828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1367e6ef-9d85-4549-95f4-08014ebc0d05 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.108598088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1367e6ef-9d85-4549-95f4-08014ebc0d05 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.110893288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=daa9aba7-8823-4234-be6c-ef2efe48bb65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.111331580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956352111308395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=daa9aba7-8823-4234-be6c-ef2efe48bb65 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.111851507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82faf42f-c238-4220-b238-7034a0227aee name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.111906713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82faf42f-c238-4220-b238-7034a0227aee name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:32:32 ha-782425 crio[671]: time="2024-08-29 18:32:32.112177824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82faf42f-c238-4220-b238-7034a0227aee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37662e4a563b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3fd1be2d5c605       busybox-7dff88458-vwgrt
	84662d6e10619       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   8293780e1d6d4       storage-provisioner
	409d0bb5b6b40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   21f825f2fab4d       coredns-6f6b679f8f-qhxm5
	4bd32029a6efc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   a3d59948e98ac       coredns-6f6b679f8f-nw2x2
	23aa351e7d2aa       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   a4dea5e1c4a59       kindnet-7l5kn
	2b337a7249ae2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   b589b425f1e05       kube-proxy-d5kbx
	216684e155595       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   6f11ab2a6fb7e       kube-vip-ha-782425
	5077da1dd8cc1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   6bd7384dc0e18       etcd-ha-782425
	a97655078532a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   8f3aec69eb919       kube-scheduler-ha-782425
	24877a3e0c79c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   633bf8a103446       kube-controller-manager-ha-782425
	33ef8a4b863ba       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   65d7a502881ae       kube-apiserver-ha-782425
	
	
	==> coredns [409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902] <==
	[INFO] 10.244.2.2:57473 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200833s
	[INFO] 10.244.2.2:52567 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010875539s
	[INFO] 10.244.2.2:49428 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147198s
	[INFO] 10.244.1.2:41836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828177s
	[INFO] 10.244.1.2:37840 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088834s
	[INFO] 10.244.1.2:58950 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398175s
	[INFO] 10.244.1.2:44242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081199s
	[INFO] 10.244.1.2:34411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000240374s
	[INFO] 10.244.0.4:53126 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090758s
	[INFO] 10.244.0.4:52901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119888s
	[INFO] 10.244.0.4:37257 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017821s
	[INFO] 10.244.0.4:52278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240335s
	[INFO] 10.244.2.2:51997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116371s
	[INFO] 10.244.2.2:50462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000182689s
	[INFO] 10.244.1.2:35790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065854s
	[INFO] 10.244.0.4:56280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165741s
	[INFO] 10.244.2.2:45436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113865s
	[INFO] 10.244.2.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000419163s
	[INFO] 10.244.2.2:49859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112498s
	[INFO] 10.244.1.2:38106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212429s
	[INFO] 10.244.1.2:54743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163094s
	[INFO] 10.244.1.2:54398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014924s
	[INFO] 10.244.1.2:38833 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103377s
	[INFO] 10.244.0.4:55589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206346s
	[INFO] 10.244.0.4:55224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098455s
	
	
	==> coredns [4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c] <==
	[INFO] 10.244.0.4:43640 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001476481s
	[INFO] 10.244.0.4:39791 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000051362s
	[INFO] 10.244.0.4:57306 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001427687s
	[INFO] 10.244.2.2:37045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125236s
	[INFO] 10.244.2.2:51775 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196255s
	[INFO] 10.244.2.2:37371 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123702s
	[INFO] 10.244.2.2:59027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137207s
	[INFO] 10.244.1.2:42349 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121881s
	[INFO] 10.244.1.2:55845 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	[INFO] 10.244.1.2:50054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077465s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001939796s
	[INFO] 10.244.0.4:39167 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001349918s
	[INFO] 10.244.0.4:55247 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192001s
	[INFO] 10.244.0.4:50279 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056293s
	[INFO] 10.244.2.2:57566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010586s
	[INFO] 10.244.2.2:59408 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079146s
	[INFO] 10.244.1.2:58697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125072s
	[INFO] 10.244.1.2:39849 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011783s
	[INFO] 10.244.1.2:34464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086829s
	[INFO] 10.244.0.4:40575 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123993s
	[INFO] 10.244.0.4:53854 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077061s
	[INFO] 10.244.0.4:35333 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069139s
	[INFO] 10.244.2.2:47493 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133201s
	[INFO] 10.244.0.4:46944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105838s
	[INFO] 10.244.0.4:56535 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148137s
	
	
	==> describe nodes <==
	Name:               ha-782425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_26_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:32:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-782425
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44ba55866afc4f4897f7d5cbfc46f2df
	  System UUID:                44ba5586-6afc-4f48-97f7-d5cbfc46f2df
	  Boot ID:                    e2df80f3-fc71-40f7-9f6a-86fc01e04fd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwgrt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-6f6b679f8f-nw2x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 coredns-6f6b679f8f-qhxm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m9s
	  kube-system                 etcd-ha-782425                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m11s
	  kube-system                 kindnet-7l5kn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-782425             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-782425    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-d5kbx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-782425             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-vip-ha-782425                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m7s   kube-proxy       
	  Normal  Starting                 6m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m11s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m11s  kubelet          Node ha-782425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s  kubelet          Node ha-782425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s  kubelet          Node ha-782425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal  NodeReady                5m54s  kubelet          Node ha-782425 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal  RegisteredNode           3m57s  node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	
	
	Name:               ha-782425-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_27_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:27:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:30:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-782425-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a438bc2a769444e18345ad0f28ed5c33
	  System UUID:                a438bc2a-7694-44e1-8345-ad0f28ed5c33
	  Boot ID:                    75f0bd0d-e15b-47c8-9ca6-c5bb7d2e1afc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rsqqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-782425-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-kw2zk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-782425-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-782425-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-5k8xr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-782425-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-vip-ha-782425-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-782425-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-782425-m02 status is now: NodeNotReady
	
	
	Name:               ha-782425-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_28_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:28:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:32:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-782425-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d557f0e4bd084f8d98554b9e0d482ef3
	  System UUID:                d557f0e4-bd08-4f8d-9855-4b9e0d482ef3
	  Boot ID:                    0b5a7eeb-45ed-43be-92d9-4127e0390a70
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8k94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-782425-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-m5jqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-782425-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-782425-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-vzss9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-782425-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-782425-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-782425-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	
	
	Name:               ha-782425-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_29_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:29:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:32:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-782425-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1d73c2cadaf4d3cb7d9a4d8e585f4dc
	  System UUID:                a1d73c2c-adaf-4d3c-b7d9-a4d8e585f4dc
	  Boot ID:                    91ce67f2-8b0c-469f-94e5-0736e893ec4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lbjt6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-5xgbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-782425-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-782425-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug29 18:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050223] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037711] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.717447] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.881000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.439989] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug29 18:26] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.056184] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054002] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.164673] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.149154] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.266975] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.780708] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.381995] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.060319] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240176] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.218514] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +2.447866] kauditd_printk_skb: 26 callbacks suppressed
	[ +15.454195] kauditd_printk_skb: 38 callbacks suppressed
	[Aug29 18:27] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240] <==
	{"level":"warn","ts":"2024-08-29T18:32:32.324158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.371384Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.378976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.381362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.392995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.405020Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.412093Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.418963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.423195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.423441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.426653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.434491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.441434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.449166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.452285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.455658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.467841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.475400Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.482486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.485845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.488959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.493234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.498831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.506100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:32:32.524083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:32:32 up 6 min,  0 users,  load average: 0.80, 0.36, 0.17
	Linux ha-782425 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c] <==
	I0829 18:31:58.595226       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:32:08.600858       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:32:08.600914       1 main.go:299] handling current node
	I0829 18:32:08.600939       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:32:08.600945       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:32:08.601129       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:32:08.601156       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:32:08.601216       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:32:08.601224       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:32:18.603446       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:32:18.603585       1 main.go:299] handling current node
	I0829 18:32:18.603765       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:32:18.603886       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:32:18.604080       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:32:18.604506       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:32:18.604710       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:32:18.604831       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:32:28.594926       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:32:28.595610       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:32:28.595965       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:32:28.596043       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:32:28.596166       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:32:28.596190       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:32:28.596271       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:32:28.596294       1 main.go:299] handling current node
	
	
	==> kube-apiserver [33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292] <==
	W0829 18:26:17.714072       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39]
	I0829 18:26:17.716025       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 18:26:17.743603       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 18:26:17.747325       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 18:26:21.823934       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 18:26:21.838572       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 18:26:21.851313       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 18:26:22.941214       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 18:26:23.439375       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 18:28:58.926967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55292: use of closed network connection
	E0829 18:28:59.113215       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55320: use of closed network connection
	E0829 18:28:59.292611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55342: use of closed network connection
	E0829 18:28:59.473011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55366: use of closed network connection
	E0829 18:28:59.661105       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55380: use of closed network connection
	E0829 18:28:59.845998       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0829 18:29:00.022186       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55412: use of closed network connection
	E0829 18:29:00.189414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55424: use of closed network connection
	E0829 18:29:00.362829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55438: use of closed network connection
	E0829 18:29:00.644380       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55458: use of closed network connection
	E0829 18:29:00.806979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55478: use of closed network connection
	E0829 18:29:00.983208       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55498: use of closed network connection
	E0829 18:29:01.155072       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55510: use of closed network connection
	E0829 18:29:01.339608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55534: use of closed network connection
	E0829 18:29:01.514915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55544: use of closed network connection
	W0829 18:30:27.718266       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.39]
	
	
	==> kube-controller-manager [24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434] <==
	I0829 18:29:30.977193       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-782425-m04" podCIDRs=["10.244.3.0/24"]
	I0829 18:29:30.977270       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:30.977336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:30.977650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:31.197436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:31.214845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:31.572319       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:32.654224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:32.655195       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-782425-m04"
	I0829 18:29:32.758004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:35.155187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:35.187216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:41.068624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:51.785588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-782425-m04"
	I0829 18:29:51.786325       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:51.804165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:52.670397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:30:01.547137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:30:47.695975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:30:47.696371       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-782425-m04"
	I0829 18:30:47.731104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:30:47.835000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.621605ms"
	I0829 18:30:47.835107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.342µs"
	I0829 18:30:50.263077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:30:52.961688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	
	
	==> kube-proxy [2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:26:24.489515       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:26:24.508455       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.39"]
	E0829 18:26:24.508982       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:26:24.569427       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:26:24.569483       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:26:24.569507       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:26:24.571810       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:26:24.572218       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:26:24.572452       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:26:24.574533       1 config.go:197] "Starting service config controller"
	I0829 18:26:24.574604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:26:24.574657       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:26:24.574676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:26:24.577339       1 config.go:326] "Starting node config controller"
	I0829 18:26:24.577371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:26:24.675657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:26:24.675685       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:26:24.677430       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7] <==
	E0829 18:26:16.986990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:26:16.999915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:26:16.999961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:26:17.315220       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:26:17.315325       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:26:20.267554       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 18:28:53.643358       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="6a403b21-4f43-4128-a1b9-b4d805e7d5b2" pod="default/busybox-7dff88458-rsqqv" assumedNode="ha-782425-m02" currentNode="ha-782425-m03"
	E0829 18:28:53.651947       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rsqqv\": pod busybox-7dff88458-rsqqv is already assigned to node \"ha-782425-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rsqqv" node="ha-782425-m03"
	E0829 18:28:53.652365       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6a403b21-4f43-4128-a1b9-b4d805e7d5b2(default/busybox-7dff88458-rsqqv) was assumed on ha-782425-m03 but assigned to ha-782425-m02" pod="default/busybox-7dff88458-rsqqv"
	E0829 18:28:53.652538       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rsqqv\": pod busybox-7dff88458-rsqqv is already assigned to node \"ha-782425-m02\"" pod="default/busybox-7dff88458-rsqqv"
	I0829 18:28:53.652740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rsqqv" node="ha-782425-m02"
	E0829 18:28:53.677627       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-h8k94" node="ha-782425-m03"
	E0829 18:28:53.677952       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" pod="default/busybox-7dff88458-h8k94"
	E0829 18:28:53.695276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:28:53.695376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e10fff1-6582-4f04-a07b-bd664457f72d(default/busybox-7dff88458-vwgrt) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vwgrt"
	E0829 18:28:53.695398       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" pod="default/busybox-7dff88458-vwgrt"
	I0829 18:28:53.695418       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:29:31.044983       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045106       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ee67d98e-b169-415c-ac85-e253e2888144(kube-system/kindnet-lbjt6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lbjt6"
	E0829 18:29:31.045132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" pod="kube-system/kindnet-lbjt6"
	I0829 18:29:31.045177       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	E0829 18:29:31.045987       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 278c58ce-3b1f-45c5-a1c9-0d2ce710f092(kube-system/kube-proxy-5xgbn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5xgbn"
	E0829 18:29:31.046008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" pod="kube-system/kube-proxy-5xgbn"
	I0829 18:29:31.046027       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	
	
	==> kubelet <==
	Aug 29 18:31:21 ha-782425 kubelet[1321]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:31:21 ha-782425 kubelet[1321]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:31:21 ha-782425 kubelet[1321]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:31:21 ha-782425 kubelet[1321]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:31:21 ha-782425 kubelet[1321]: E0829 18:31:21.861665    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956281861406149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:21 ha-782425 kubelet[1321]: E0829 18:31:21.861699    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956281861406149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:31 ha-782425 kubelet[1321]: E0829 18:31:31.863301    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956291862611137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:31 ha-782425 kubelet[1321]: E0829 18:31:31.863750    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956291862611137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:41 ha-782425 kubelet[1321]: E0829 18:31:41.865447    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956301865061846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:41 ha-782425 kubelet[1321]: E0829 18:31:41.865507    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956301865061846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:51 ha-782425 kubelet[1321]: E0829 18:31:51.867066    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956311866689909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:31:51 ha-782425 kubelet[1321]: E0829 18:31:51.867109    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956311866689909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:01 ha-782425 kubelet[1321]: E0829 18:32:01.869104    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956321868694924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:01 ha-782425 kubelet[1321]: E0829 18:32:01.869143    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956321868694924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:11 ha-782425 kubelet[1321]: E0829 18:32:11.870647    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956331870124306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:11 ha-782425 kubelet[1321]: E0829 18:32:11.871636    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956331870124306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:21 ha-782425 kubelet[1321]: E0829 18:32:21.761126    1321 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 18:32:21 ha-782425 kubelet[1321]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:32:21 ha-782425 kubelet[1321]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:32:21 ha-782425 kubelet[1321]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:32:21 ha-782425 kubelet[1321]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:32:21 ha-782425 kubelet[1321]: E0829 18:32:21.873973    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956341873360262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:21 ha-782425 kubelet[1321]: E0829 18:32:21.874024    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956341873360262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:31 ha-782425 kubelet[1321]: E0829 18:32:31.876827    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956351875981359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:31 ha-782425 kubelet[1321]: E0829 18:32:31.876867    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956351875981359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-782425 -n ha-782425
helpers_test.go:261: (dbg) Run:  kubectl --context ha-782425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.80s)
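Note: the kubelet and kube-proxy excerpts above repeatedly report ip6tables/nftables failures ("can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)", "could not run nftables command: ... Operation not supported"). Those messages point at missing ip6tables-nat/nftables support in the guest kernel image rather than at anything specific to this test. A minimal way to confirm that from the host, assuming lsmod and nft are available inside the minikube guest (a hypothetical check, not part of the harness):

	out/minikube-linux-amd64 -p ha-782425 ssh -- "lsmod | grep -E 'ip6table_nat|nf_tables'"
	out/minikube-linux-amd64 -p ha-782425 ssh -- "sudo nft list tables"

If the first command returns nothing, the kube-proxy "Error cleaning up nftables rules" and the kubelet KUBE-KUBELET-CANARY messages above are consistent with that missing kernel support rather than with a regression in the cluster under test.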

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (3.216695638s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:32:37.049963   36649 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:32:37.050249   36649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:37.050262   36649 out.go:358] Setting ErrFile to fd 2...
	I0829 18:32:37.050267   36649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:37.050460   36649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:32:37.050632   36649 out.go:352] Setting JSON to false
	I0829 18:32:37.050658   36649 mustload.go:65] Loading cluster: ha-782425
	I0829 18:32:37.050787   36649 notify.go:220] Checking for updates...
	I0829 18:32:37.051114   36649 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:32:37.051136   36649 status.go:255] checking status of ha-782425 ...
	I0829 18:32:37.051598   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:37.051660   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:37.071499   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0829 18:32:37.071916   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:37.072399   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:37.072419   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:37.072724   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:37.072945   36649 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:32:37.074411   36649 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:32:37.074427   36649 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:37.074732   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:37.074765   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:37.090000   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40499
	I0829 18:32:37.090401   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:37.090847   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:37.090865   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:37.091176   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:37.091342   36649 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:32:37.094391   36649 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:37.094843   36649 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:37.094861   36649 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:37.095042   36649 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:37.095334   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:37.095373   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:37.110609   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37025
	I0829 18:32:37.111004   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:37.111427   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:37.111444   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:37.111786   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:37.111943   36649 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:32:37.112139   36649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:37.112171   36649 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:32:37.114949   36649 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:37.115421   36649 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:37.115461   36649 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:37.115470   36649 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:32:37.115683   36649 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:32:37.115812   36649 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:32:37.115945   36649 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:32:37.197100   36649 ssh_runner.go:195] Run: systemctl --version
	I0829 18:32:37.202829   36649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:37.218590   36649 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:37.218619   36649 api_server.go:166] Checking apiserver status ...
	I0829 18:32:37.218651   36649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:37.235060   36649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:32:37.244831   36649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:37.244883   36649 ssh_runner.go:195] Run: ls
	I0829 18:32:37.248874   36649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:37.252953   36649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:37.252974   36649 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:32:37.252986   36649 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:37.253006   36649 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:32:37.253287   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:37.253332   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:37.268232   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0829 18:32:37.268643   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:37.269091   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:37.269110   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:37.269411   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:37.269611   36649 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:32:37.270923   36649 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:32:37.270941   36649 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:37.271243   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:37.271273   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:37.286081   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0829 18:32:37.286468   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:37.286889   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:37.286915   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:37.287268   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:37.287464   36649 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:32:37.289932   36649 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:37.290400   36649 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:37.290437   36649 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:37.290564   36649 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:37.290850   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:37.290885   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:37.305952   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0829 18:32:37.306394   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:37.306848   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:37.306867   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:37.307174   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:37.307345   36649 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:32:37.307528   36649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:37.307550   36649 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:32:37.309923   36649 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:37.310310   36649 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:37.310349   36649 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:37.310522   36649 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:32:37.310685   36649 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:32:37.310857   36649 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:32:37.311010   36649 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	W0829 18:32:39.870376   36649 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:39.870471   36649 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0829 18:32:39.870489   36649 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:39.870518   36649 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:32:39.870535   36649 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:39.870542   36649 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:32:39.870837   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:39.870880   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:39.887459   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0829 18:32:39.887884   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:39.888411   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:39.888440   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:39.888823   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:39.889095   36649 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:32:39.890811   36649 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:32:39.890829   36649 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:39.891241   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:39.891283   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:39.908720   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39817
	I0829 18:32:39.909143   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:39.909624   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:39.909645   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:39.909947   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:39.910138   36649 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:32:39.913065   36649 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:39.913523   36649 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:39.913547   36649 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:39.913689   36649 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:39.914005   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:39.914055   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:39.929584   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I0829 18:32:39.930105   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:39.930663   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:39.930683   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:39.930986   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:39.931165   36649 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:32:39.931328   36649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:39.931358   36649 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:32:39.934237   36649 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:39.934718   36649 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:39.934754   36649 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:39.934884   36649 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:32:39.935055   36649 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:32:39.935208   36649 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:32:39.935344   36649 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:32:40.017465   36649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:40.034890   36649 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:40.034923   36649 api_server.go:166] Checking apiserver status ...
	I0829 18:32:40.034973   36649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:40.049872   36649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:32:40.060033   36649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:40.060090   36649 ssh_runner.go:195] Run: ls
	I0829 18:32:40.064013   36649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:40.068395   36649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:40.068418   36649 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:32:40.068429   36649 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:40.068447   36649 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:32:40.068728   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:40.068761   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:40.084753   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0829 18:32:40.085170   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:40.085655   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:40.085681   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:40.085978   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:40.086208   36649 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:32:40.087791   36649 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:32:40.087809   36649 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:40.088092   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:40.088125   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:40.103575   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
	I0829 18:32:40.103979   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:40.104440   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:40.104463   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:40.104730   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:40.104948   36649 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:32:40.107883   36649 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:40.108295   36649 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:40.108324   36649 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:40.108717   36649 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:40.108992   36649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:40.109034   36649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:40.124915   36649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0829 18:32:40.125360   36649 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:40.125858   36649 main.go:141] libmachine: Using API Version  1
	I0829 18:32:40.125888   36649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:40.126240   36649 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:40.126440   36649 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:32:40.126676   36649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:40.126694   36649 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:32:40.129705   36649 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:40.130291   36649 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:40.130316   36649 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:40.130480   36649 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:32:40.130670   36649 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:32:40.130838   36649 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:32:40.130993   36649 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:32:40.213061   36649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:40.226685   36649 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
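The stderr above shows the status check for ha-782425-m02 failing at the SSH step ("dial tcp 192.168.39.253:22: connect: no route to host"), which is why that node is reported as host: Error / kubelet: Nonexistent while the other members stay Running. A quick way to reproduce just that probe outside the harness, reusing the key path, user and address shown in the log (a sketch, not part of the test code):

	ssh -o ConnectTimeout=5 -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa docker@192.168.39.253 'df -h /var'
	out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr

The first command is essentially what status.go runs (sh -c "df -h /var | awk 'NR==2{print $5}'") before it marks the host state, so if it times out the status output above follows directly.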
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (4.834155711s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:32:41.560418   36750 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:32:41.560539   36750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:41.560548   36750 out.go:358] Setting ErrFile to fd 2...
	I0829 18:32:41.560553   36750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:41.560719   36750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:32:41.560895   36750 out.go:352] Setting JSON to false
	I0829 18:32:41.560920   36750 mustload.go:65] Loading cluster: ha-782425
	I0829 18:32:41.560968   36750 notify.go:220] Checking for updates...
	I0829 18:32:41.561442   36750 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:32:41.561462   36750 status.go:255] checking status of ha-782425 ...
	I0829 18:32:41.561890   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:41.561958   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:41.581436   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0829 18:32:41.581850   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:41.582445   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:41.582470   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:41.582886   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:41.583078   36750 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:32:41.584694   36750 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:32:41.584706   36750 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:41.584964   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:41.584994   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:41.599154   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0829 18:32:41.599496   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:41.599930   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:41.599948   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:41.600224   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:41.600381   36750 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:32:41.602935   36750 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:41.603377   36750 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:41.603402   36750 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:41.603538   36750 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:41.603813   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:41.603860   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:41.618473   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0829 18:32:41.618810   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:41.619253   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:41.619275   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:41.619586   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:41.619787   36750 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:32:41.619950   36750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:41.619985   36750 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:32:41.622264   36750 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:41.622680   36750 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:41.622716   36750 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:41.622812   36750 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:32:41.622970   36750 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:32:41.623099   36750 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:32:41.623276   36750 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:32:41.704910   36750 ssh_runner.go:195] Run: systemctl --version
	I0829 18:32:41.710732   36750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:41.724698   36750 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:41.724727   36750 api_server.go:166] Checking apiserver status ...
	I0829 18:32:41.724755   36750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:41.742612   36750 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:32:41.752062   36750 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:41.752129   36750 ssh_runner.go:195] Run: ls
	I0829 18:32:41.756360   36750 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:41.761092   36750 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:41.761115   36750 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:32:41.761130   36750 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:41.761152   36750 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:32:41.761574   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:41.761619   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:41.776716   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0829 18:32:41.777209   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:41.777756   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:41.777781   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:41.778116   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:41.778363   36750 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:32:41.780065   36750 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:32:41.780084   36750 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:41.780554   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:41.780601   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:41.796276   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0829 18:32:41.796737   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:41.797212   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:41.797233   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:41.797582   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:41.797797   36750 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:32:41.800618   36750 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:41.800986   36750 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:41.801004   36750 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:41.801125   36750 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:41.801436   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:41.801465   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:41.816178   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0829 18:32:41.816641   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:41.817135   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:41.817158   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:41.817445   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:41.817607   36750 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:32:41.817788   36750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:41.817812   36750 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:32:41.820588   36750 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:41.821035   36750 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:41.821062   36750 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:41.821197   36750 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:32:41.821369   36750 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:32:41.821505   36750 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:32:41.821634   36750 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	W0829 18:32:42.946421   36750 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:42.946473   36750 retry.go:31] will retry after 162.727952ms: dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:46.014329   36750 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:46.014406   36750 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0829 18:32:46.014423   36750 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:46.014436   36750 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:32:46.014470   36750 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:46.014480   36750 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:32:46.014795   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:46.014837   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:46.029686   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0829 18:32:46.030143   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:46.030621   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:46.030647   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:46.030944   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:46.031139   36750 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:32:46.032459   36750 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:32:46.032483   36750 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:46.032796   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:46.032835   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:46.047565   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
	I0829 18:32:46.047965   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:46.048352   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:46.048460   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:46.048874   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:46.049081   36750 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:32:46.051913   36750 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:46.052331   36750 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:46.052360   36750 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:46.052482   36750 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:46.052814   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:46.052854   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:46.068341   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I0829 18:32:46.068763   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:46.069232   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:46.069253   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:46.069559   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:46.069744   36750 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:32:46.069895   36750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:46.069916   36750 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:32:46.072764   36750 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:46.073135   36750 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:46.073164   36750 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:46.073313   36750 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:32:46.073461   36750 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:32:46.073612   36750 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:32:46.073711   36750 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:32:46.150106   36750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:46.168876   36750 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:46.168897   36750 api_server.go:166] Checking apiserver status ...
	I0829 18:32:46.168927   36750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:46.183282   36750 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:32:46.192087   36750 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:46.192163   36750 ssh_runner.go:195] Run: ls
	I0829 18:32:46.196046   36750 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:46.200494   36750 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:46.200511   36750 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:32:46.200519   36750 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:46.200535   36750 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:32:46.200829   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:46.200859   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:46.216469   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38707
	I0829 18:32:46.216951   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:46.217437   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:46.217478   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:46.217794   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:46.217974   36750 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:32:46.219789   36750 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:32:46.219803   36750 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:46.220101   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:46.220135   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:46.236296   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0829 18:32:46.236728   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:46.237213   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:46.237233   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:46.237536   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:46.237696   36750 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:32:46.240394   36750 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:46.240791   36750 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:46.240814   36750 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:46.240946   36750 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:46.241228   36750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:46.241260   36750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:46.256061   36750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I0829 18:32:46.256533   36750 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:46.257010   36750 main.go:141] libmachine: Using API Version  1
	I0829 18:32:46.257043   36750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:46.257387   36750 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:46.257583   36750 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:32:46.257772   36750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:46.257795   36750 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:32:46.260451   36750 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:46.260835   36750 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:46.260855   36750 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:46.260984   36750 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:32:46.261151   36750 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:32:46.261284   36750 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:32:46.261417   36750 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:32:46.341150   36750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:46.353906   36750 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
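The status runs above repeatedly execute sh -c "df -h /var | awk 'NR==2{print $5}'" on each node to read the /var usage percentage, and it is this command that fails against ha-782425-m02 with "no route to host". The following standalone Go sketch runs the same pipeline locally for illustration only; the SSH transport, retry loop, and error mapping that the log shows are omitted and are not reproduced from minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// varUsage runs the same df/awk pipeline seen in the log: NR==2 selects
// df's data row and $5 is the Use% column for /var.
func varUsage() (string, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return "", fmt.Errorf("df failed: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	usage, err := varUsage()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("/var usage:", usage)
}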
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (4.122902355s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:32:48.541413   36858 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:32:48.541511   36858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:48.541516   36858 out.go:358] Setting ErrFile to fd 2...
	I0829 18:32:48.541520   36858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:48.541698   36858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:32:48.541850   36858 out.go:352] Setting JSON to false
	I0829 18:32:48.541873   36858 mustload.go:65] Loading cluster: ha-782425
	I0829 18:32:48.541997   36858 notify.go:220] Checking for updates...
	I0829 18:32:48.542276   36858 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:32:48.542291   36858 status.go:255] checking status of ha-782425 ...
	I0829 18:32:48.542658   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:48.542696   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:48.562656   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0829 18:32:48.563086   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:48.563737   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:48.563765   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:48.564090   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:48.564301   36858 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:32:48.565840   36858 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:32:48.565854   36858 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:48.566201   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:48.566238   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:48.581125   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I0829 18:32:48.581535   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:48.582057   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:48.582078   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:48.582458   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:48.582684   36858 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:32:48.586399   36858 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:48.586914   36858 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:48.586938   36858 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:48.587119   36858 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:48.587527   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:48.587596   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:48.602748   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I0829 18:32:48.603187   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:48.603643   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:48.603669   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:48.603927   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:48.604082   36858 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:32:48.604273   36858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:48.604309   36858 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:32:48.607074   36858 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:48.607563   36858 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:48.607587   36858 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:48.607734   36858 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:32:48.607932   36858 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:32:48.608058   36858 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:32:48.608201   36858 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:32:48.689585   36858 ssh_runner.go:195] Run: systemctl --version
	I0829 18:32:48.695193   36858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:48.708902   36858 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:48.708936   36858 api_server.go:166] Checking apiserver status ...
	I0829 18:32:48.708967   36858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:48.722499   36858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:32:48.731603   36858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:48.731666   36858 ssh_runner.go:195] Run: ls
	I0829 18:32:48.736435   36858 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:48.742407   36858 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:48.742440   36858 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:32:48.742453   36858 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:48.742484   36858 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:32:48.742802   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:48.742847   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:48.757762   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I0829 18:32:48.758163   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:48.758670   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:48.758696   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:48.758981   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:48.759164   36858 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:32:48.760680   36858 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:32:48.760696   36858 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:48.761007   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:48.761047   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:48.776808   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0829 18:32:48.777230   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:48.777699   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:48.777723   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:48.778079   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:48.778281   36858 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:32:48.781183   36858 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:48.781657   36858 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:48.781683   36858 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:48.781795   36858 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:48.782104   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:48.782145   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:48.796901   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37071
	I0829 18:32:48.797295   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:48.797725   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:48.797746   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:48.798022   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:48.798225   36858 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:32:48.798444   36858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:48.798462   36858 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:32:48.801054   36858 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:48.801438   36858 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:48.801457   36858 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:48.801620   36858 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:32:48.801768   36858 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:32:48.801912   36858 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:32:48.802052   36858 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	W0829 18:32:49.086397   36858 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:49.086464   36858 retry.go:31] will retry after 127.374578ms: dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:52.286422   36858 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:52.286520   36858 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0829 18:32:52.286540   36858 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:52.286550   36858 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:32:52.286570   36858 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:52.286578   36858 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:32:52.286876   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:52.286914   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:52.302184   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0829 18:32:52.302601   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:52.303083   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:52.303113   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:52.303440   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:52.303625   36858 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:32:52.305308   36858 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:32:52.305336   36858 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:52.305623   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:52.305655   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:52.320676   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0829 18:32:52.321061   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:52.321467   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:52.321486   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:52.321891   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:52.322114   36858 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:32:52.324894   36858 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:52.325419   36858 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:52.325448   36858 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:52.325584   36858 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:52.325925   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:52.325968   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:52.341217   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0829 18:32:52.341667   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:52.342185   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:52.342219   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:52.342510   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:52.342706   36858 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:32:52.342900   36858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:52.342920   36858 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:32:52.345735   36858 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:52.346159   36858 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:52.346191   36858 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:52.346329   36858 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:32:52.346524   36858 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:32:52.346711   36858 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:32:52.346831   36858 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:32:52.425218   36858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:52.441168   36858 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:52.441204   36858 api_server.go:166] Checking apiserver status ...
	I0829 18:32:52.441244   36858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:52.455347   36858 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:32:52.463862   36858 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:52.463909   36858 ssh_runner.go:195] Run: ls
	I0829 18:32:52.467858   36858 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:52.471869   36858 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:52.471890   36858 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:32:52.471899   36858 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:52.471928   36858 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:32:52.472244   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:52.472315   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:52.487249   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
	I0829 18:32:52.487741   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:52.488246   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:52.488265   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:52.488693   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:52.488866   36858 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:32:52.490380   36858 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:32:52.490398   36858 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:52.490671   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:52.490717   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:52.504908   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I0829 18:32:52.505308   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:52.505753   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:52.505775   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:52.506040   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:52.506188   36858 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:32:52.508664   36858 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:52.509090   36858 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:52.509122   36858 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:52.509193   36858 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:52.509546   36858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:52.509587   36858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:52.524300   36858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0829 18:32:52.524628   36858 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:52.525040   36858 main.go:141] libmachine: Using API Version  1
	I0829 18:32:52.525059   36858 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:52.525326   36858 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:52.525544   36858 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:32:52.525720   36858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:52.525742   36858 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:32:52.528287   36858 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:52.528715   36858 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:52.528741   36858 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:52.528847   36858 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:32:52.529003   36858 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:32:52.529127   36858 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:32:52.529260   36858 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:32:52.609221   36858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:52.623472   36858 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
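For the nodes it can reach, the status command also runs sudo systemctl is-active --quiet service kubelet (copied verbatim from the log above) and reports "kubelet: Running" when the command succeeds. The sketch below mirrors that check under the assumption that a zero exit status is what maps to Running in the printed output; it is an illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the exact command from the log; with --quiet,
// systemctl prints nothing and the exit status carries the result,
// so only the returned error is inspected.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	if kubeletActive() {
		fmt.Println("kubelet: Running")
	} else {
		fmt.Println("kubelet: Stopped or Nonexistent")
	}
}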
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (4.683493925s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:32:54.253937   36958 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:32:54.254216   36958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:54.254227   36958 out.go:358] Setting ErrFile to fd 2...
	I0829 18:32:54.254233   36958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:32:54.254478   36958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:32:54.254725   36958 out.go:352] Setting JSON to false
	I0829 18:32:54.254759   36958 mustload.go:65] Loading cluster: ha-782425
	I0829 18:32:54.254884   36958 notify.go:220] Checking for updates...
	I0829 18:32:54.255147   36958 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:32:54.255176   36958 status.go:255] checking status of ha-782425 ...
	I0829 18:32:54.255654   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:54.255725   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:54.274859   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40827
	I0829 18:32:54.275332   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:54.275935   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:54.275960   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:54.276284   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:54.276457   36958 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:32:54.277927   36958 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:32:54.277940   36958 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:54.278377   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:54.278424   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:54.292862   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35289
	I0829 18:32:54.293270   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:54.293873   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:54.293897   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:54.294216   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:54.294406   36958 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:32:54.297136   36958 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:54.297621   36958 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:54.297640   36958 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:54.297800   36958 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:32:54.298343   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:54.298390   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:54.313305   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0829 18:32:54.313772   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:54.314243   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:54.314268   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:54.314623   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:54.314824   36958 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:32:54.315056   36958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:54.315083   36958 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:32:54.317698   36958 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:54.318109   36958 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:32:54.318147   36958 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:32:54.318239   36958 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:32:54.318410   36958 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:32:54.318555   36958 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:32:54.318667   36958 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:32:54.401849   36958 ssh_runner.go:195] Run: systemctl --version
	I0829 18:32:54.407520   36958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:54.422117   36958 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:54.422146   36958 api_server.go:166] Checking apiserver status ...
	I0829 18:32:54.422189   36958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:54.435176   36958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:32:54.444130   36958 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:54.444182   36958 ssh_runner.go:195] Run: ls
	I0829 18:32:54.449152   36958 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:54.453430   36958 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:54.453451   36958 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:32:54.453463   36958 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:54.453478   36958 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:32:54.453760   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:54.453788   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:54.468357   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
	I0829 18:32:54.468739   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:54.469198   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:54.469212   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:54.469496   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:54.469693   36958 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:32:54.471061   36958 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:32:54.471076   36958 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:54.471456   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:54.471492   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:54.486448   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
	I0829 18:32:54.486839   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:54.487312   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:54.487330   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:54.487623   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:54.487797   36958 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:32:54.490266   36958 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:54.490673   36958 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:54.490699   36958 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:54.490797   36958 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:32:54.491082   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:54.491118   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:54.505705   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43519
	I0829 18:32:54.506119   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:54.506545   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:54.506563   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:54.506839   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:54.506981   36958 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:32:54.507158   36958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:54.507176   36958 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:32:54.509536   36958 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:54.509889   36958 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:32:54.509913   36958 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:32:54.510070   36958 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:32:54.510239   36958 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:32:54.510372   36958 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:32:54.510532   36958 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	W0829 18:32:55.362334   36958 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:55.362378   36958 retry.go:31] will retry after 138.388101ms: dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:58.558337   36958 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:32:58.558457   36958 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0829 18:32:58.558482   36958 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:58.558492   36958 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:32:58.558514   36958 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:32:58.558525   36958 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:32:58.559053   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:58.559121   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:58.574488   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46407
	I0829 18:32:58.574845   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:58.575272   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:58.575299   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:58.575605   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:58.575775   36958 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:32:58.577228   36958 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:32:58.577243   36958 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:58.577631   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:58.577673   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:58.592967   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I0829 18:32:58.593299   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:58.593752   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:58.593771   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:58.594077   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:58.594262   36958 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:32:58.597355   36958 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:58.597807   36958 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:58.597831   36958 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:58.597959   36958 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:32:58.598416   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:58.598482   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:58.612829   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I0829 18:32:58.613232   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:58.613648   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:58.613670   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:58.613990   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:58.614168   36958 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:32:58.614329   36958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:58.614347   36958 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:32:58.616846   36958 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:58.617260   36958 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:32:58.617286   36958 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:32:58.617458   36958 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:32:58.617613   36958 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:32:58.617776   36958 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:32:58.617923   36958 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:32:58.693011   36958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:58.706910   36958 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:32:58.706933   36958 api_server.go:166] Checking apiserver status ...
	I0829 18:32:58.706970   36958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:32:58.720362   36958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:32:58.729751   36958 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:32:58.729801   36958 ssh_runner.go:195] Run: ls
	I0829 18:32:58.733725   36958 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:32:58.739714   36958 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:32:58.739733   36958 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:32:58.739743   36958 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:32:58.739762   36958 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:32:58.740047   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:58.740088   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:58.756188   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0829 18:32:58.756592   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:58.757062   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:58.757082   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:58.757382   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:58.757549   36958 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:32:58.759053   36958 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:32:58.759067   36958 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:58.759382   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:58.759421   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:58.774132   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0829 18:32:58.774574   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:58.775046   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:58.775068   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:58.775349   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:58.775526   36958 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:32:58.778358   36958 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:58.778807   36958 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:58.778832   36958 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:58.778965   36958 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:32:58.779288   36958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:32:58.779322   36958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:32:58.794577   36958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0829 18:32:58.795013   36958 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:32:58.795509   36958 main.go:141] libmachine: Using API Version  1
	I0829 18:32:58.795527   36958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:32:58.795838   36958 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:32:58.796009   36958 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:32:58.796204   36958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:32:58.796225   36958 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:32:58.798925   36958 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:58.799343   36958 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:32:58.799384   36958 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:32:58.799520   36958 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:32:58.799688   36958 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:32:58.799793   36958 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:32:58.799896   36958 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:32:58.881154   36958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:32:58.894984   36958 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
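The "unable to find freezer cgroup" warning above comes from the apiserver status probe: after pgrep locates the kube-apiserver pid, the checker greps /proc/<pid>/cgroup for a v1 "freezer" controller entry; on hosts without one the grep exits 1, the warning is logged, and the check simply proceeds to the /healthz probe. Below is a minimal Go sketch of that cgroup lookup, assuming only the /proc file format implied by the command (lines like "4:freezer:/..."); it is an illustration, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasFreezerCgroup reports whether the given process's cgroup file lists a
// cgroup-v1 "freezer" controller (a line such as "4:freezer:/kubepods/...").
// On cgroup-v2-only hosts no such line exists, which is the case the log
// warns about before falling through to the HTTP health check.
func hasFreezerCgroup(pid int) (bool, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line is "<hierarchy-id>:<controllers>:<path>".
		fields := strings.SplitN(sc.Text(), ":", 3)
		if len(fields) == 3 && fields[1] == "freezer" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasFreezerCgroup(os.Getpid())
	fmt.Println("freezer cgroup present:", ok, "err:", err)
}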
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (3.717883404s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:33:03.215242   37075 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:33:03.215514   37075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:03.215528   37075 out.go:358] Setting ErrFile to fd 2...
	I0829 18:33:03.215534   37075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:03.215804   37075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:33:03.216016   37075 out.go:352] Setting JSON to false
	I0829 18:33:03.216049   37075 mustload.go:65] Loading cluster: ha-782425
	I0829 18:33:03.216159   37075 notify.go:220] Checking for updates...
	I0829 18:33:03.216594   37075 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:33:03.216611   37075 status.go:255] checking status of ha-782425 ...
	I0829 18:33:03.216987   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:03.217042   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:03.235473   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37167
	I0829 18:33:03.235917   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:03.236525   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:03.236552   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:03.236924   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:03.237141   37075 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:33:03.238660   37075 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:33:03.238678   37075 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:03.238966   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:03.238998   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:03.254517   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0829 18:33:03.255012   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:03.255477   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:03.255491   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:03.255770   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:03.255937   37075 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:33:03.258614   37075 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:03.259076   37075 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:03.259112   37075 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:03.259214   37075 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:03.259490   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:03.259520   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:03.274548   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I0829 18:33:03.274944   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:03.275371   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:03.275393   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:03.275724   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:03.275896   37075 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:33:03.276044   37075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:03.276061   37075 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:33:03.278732   37075 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:03.279072   37075 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:03.279093   37075 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:03.279243   37075 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:33:03.279399   37075 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:33:03.279549   37075 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:33:03.279654   37075 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:33:03.361807   37075 ssh_runner.go:195] Run: systemctl --version
	I0829 18:33:03.369357   37075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:03.383477   37075 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:03.383508   37075 api_server.go:166] Checking apiserver status ...
	I0829 18:33:03.383537   37075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:03.398888   37075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:33:03.410138   37075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:03.410191   37075 ssh_runner.go:195] Run: ls
	I0829 18:33:03.414738   37075 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:03.420840   37075 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:03.420859   37075 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:33:03.420867   37075 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:03.420882   37075 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:33:03.421151   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:03.421182   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:03.435999   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0829 18:33:03.436407   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:03.436893   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:03.436913   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:03.437219   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:03.437405   37075 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:33:03.438853   37075 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:33:03.438869   37075 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:33:03.439156   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:03.439189   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:03.453722   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41583
	I0829 18:33:03.454119   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:03.454626   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:03.454650   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:03.454973   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:03.455168   37075 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:33:03.458347   37075 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:33:03.458723   37075 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:33:03.458752   37075 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:33:03.458958   37075 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:33:03.459373   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:03.459419   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:03.475511   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0829 18:33:03.475931   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:03.476485   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:03.476507   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:03.476789   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:03.476978   37075 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:33:03.477130   37075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:03.477149   37075 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:33:03.480313   37075 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:33:03.480701   37075 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:33:03.480719   37075 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:33:03.480899   37075 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:33:03.481038   37075 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:33:03.481160   37075 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:33:03.481279   37075 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	W0829 18:33:06.558387   37075 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0829 18:33:06.558463   37075 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0829 18:33:06.558477   37075 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:33:06.558498   37075 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:33:06.558516   37075 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0829 18:33:06.558528   37075 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:33:06.558876   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:06.558935   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:06.573405   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0829 18:33:06.573874   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:06.574396   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:06.574428   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:06.574771   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:06.574954   37075 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:33:06.576257   37075 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:33:06.576280   37075 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:06.576611   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:06.576653   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:06.592039   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0829 18:33:06.592479   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:06.592900   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:06.592922   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:06.593204   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:06.593386   37075 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:33:06.595867   37075 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:06.596245   37075 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:06.596270   37075 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:06.596398   37075 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:06.596705   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:06.596739   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:06.612077   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0829 18:33:06.612515   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:06.612936   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:06.612960   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:06.613215   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:06.613416   37075 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:33:06.613556   37075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:06.613587   37075 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:33:06.616169   37075 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:06.616603   37075 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:06.616638   37075 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:06.616818   37075 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:33:06.616994   37075 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:33:06.617144   37075 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:33:06.617300   37075 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:33:06.692951   37075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:06.706982   37075 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:06.707009   37075 api_server.go:166] Checking apiserver status ...
	I0829 18:33:06.707042   37075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:06.721077   37075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:33:06.730529   37075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:06.730588   37075 ssh_runner.go:195] Run: ls
	I0829 18:33:06.735057   37075 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:06.739489   37075 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:06.739509   37075 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:33:06.739519   37075 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:06.739537   37075 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:33:06.739830   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:06.739869   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:06.754950   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38303
	I0829 18:33:06.755353   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:06.755769   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:06.755790   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:06.756056   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:06.756216   37075 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:33:06.757618   37075 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:33:06.757632   37075 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:06.757889   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:06.757925   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:06.771695   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43757
	I0829 18:33:06.772069   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:06.772510   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:06.772537   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:06.772830   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:06.773030   37075 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:33:06.775654   37075 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:06.776034   37075 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:06.776059   37075 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:06.776214   37075 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:06.776527   37075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:06.776562   37075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:06.791221   37075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I0829 18:33:06.791596   37075 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:06.792019   37075 main.go:141] libmachine: Using API Version  1
	I0829 18:33:06.792038   37075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:06.792325   37075 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:06.792494   37075 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:33:06.792666   37075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:06.792684   37075 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:33:06.795123   37075 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:06.795552   37075 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:06.795581   37075 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:06.795722   37075 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:33:06.795869   37075 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:33:06.795973   37075 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:33:06.796061   37075 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:33:06.877112   37075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:06.890931   37075 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
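The m02 result in this run is driven by the SSH dial: the status command tries to open 192.168.39.253:22 to run its df and kubelet probes, the dial fails with "no route to host", and the node is reported as Host:Error with Kubelet and APIServer Nonexistent. A minimal sketch of that kind of reachability check follows, using a plain bounded TCP dial; the helper name and timeout are hypothetical and this is not minikube's implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to host:22 can be opened
// within the given timeout. The address below is the one from the log.
func sshReachable(host string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		// e.g. "dial tcp 192.168.39.253:22: connect: no route to host"
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !sshReachable("192.168.39.253", 3*time.Second) {
		fmt.Println("host: Error, kubelet: Nonexistent (node unreachable over SSH)")
		return
	}
	fmt.Println("host: Running")
}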
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 7 (598.788581ms)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:33:13.908221   37211 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:33:13.908333   37211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:13.908342   37211 out.go:358] Setting ErrFile to fd 2...
	I0829 18:33:13.908348   37211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:13.908510   37211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:33:13.908684   37211 out.go:352] Setting JSON to false
	I0829 18:33:13.908711   37211 mustload.go:65] Loading cluster: ha-782425
	I0829 18:33:13.908761   37211 notify.go:220] Checking for updates...
	I0829 18:33:13.909197   37211 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:33:13.909219   37211 status.go:255] checking status of ha-782425 ...
	I0829 18:33:13.909678   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:13.909729   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:13.928508   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
	I0829 18:33:13.928921   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:13.929449   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:13.929477   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:13.929804   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:13.929992   37211 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:33:13.931897   37211 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:33:13.931916   37211 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:13.932214   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:13.932257   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:13.947171   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43825
	I0829 18:33:13.947564   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:13.948016   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:13.948031   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:13.948330   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:13.948536   37211 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:33:13.951776   37211 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:13.952186   37211 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:13.952212   37211 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:13.952310   37211 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:13.952669   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:13.952710   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:13.967723   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0829 18:33:13.968102   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:13.968547   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:13.968563   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:13.968860   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:13.969076   37211 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:33:13.969295   37211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:13.969334   37211 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:33:13.971962   37211 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:13.972327   37211 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:13.972350   37211 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:13.972531   37211 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:33:13.972704   37211 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:33:13.972862   37211 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:33:13.973008   37211 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:33:14.053676   37211 ssh_runner.go:195] Run: systemctl --version
	I0829 18:33:14.059981   37211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:14.075495   37211 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:14.075528   37211 api_server.go:166] Checking apiserver status ...
	I0829 18:33:14.075565   37211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:14.090498   37211 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:33:14.100772   37211 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:14.100829   37211 ssh_runner.go:195] Run: ls
	I0829 18:33:14.104990   37211 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:14.110808   37211 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:14.110833   37211 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:33:14.110842   37211 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:14.110857   37211 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:33:14.111187   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.111224   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.126913   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
	I0829 18:33:14.127423   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.127963   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.127986   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.128231   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.128432   37211 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:33:14.130109   37211 status.go:330] ha-782425-m02 host status = "Stopped" (err=<nil>)
	I0829 18:33:14.130123   37211 status.go:343] host is not running, skipping remaining checks
	I0829 18:33:14.130129   37211 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:14.130176   37211 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:33:14.130458   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.130488   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.145274   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0829 18:33:14.145787   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.146327   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.146350   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.146639   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.146824   37211 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:33:14.148435   37211 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:33:14.148458   37211 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:14.148741   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.148772   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.164621   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0829 18:33:14.165174   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.165746   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.165774   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.166133   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.166341   37211 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:33:14.168945   37211 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:14.169374   37211 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:14.169399   37211 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:14.169597   37211 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:14.169939   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.169977   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.186189   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39501
	I0829 18:33:14.186614   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.187039   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.187059   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.187407   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.187600   37211 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:33:14.187773   37211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:14.187797   37211 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:33:14.190660   37211 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:14.191120   37211 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:14.191145   37211 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:14.191364   37211 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:33:14.191527   37211 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:33:14.191646   37211 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:33:14.191750   37211 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:33:14.268920   37211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:14.282366   37211 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:14.282391   37211 api_server.go:166] Checking apiserver status ...
	I0829 18:33:14.282425   37211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:14.296684   37211 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:33:14.306448   37211 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:14.306505   37211 ssh_runner.go:195] Run: ls
	I0829 18:33:14.310833   37211 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:14.315330   37211 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:14.315350   37211 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:33:14.315357   37211 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:14.315374   37211 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:33:14.315662   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.315691   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.330438   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34651
	I0829 18:33:14.330840   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.331275   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.331294   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.331571   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.331768   37211 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:33:14.333291   37211 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:33:14.333308   37211 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:14.333585   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.333621   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.348565   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0829 18:33:14.348951   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.349378   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.349396   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.349686   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.349872   37211 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:33:14.352634   37211 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:14.353047   37211 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:14.353086   37211 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:14.353230   37211 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:14.353677   37211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:14.353723   37211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:14.369258   37211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I0829 18:33:14.369683   37211 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:14.370209   37211 main.go:141] libmachine: Using API Version  1
	I0829 18:33:14.370231   37211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:14.370548   37211 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:14.370759   37211 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:33:14.370956   37211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:14.370976   37211 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:33:14.373341   37211 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:14.373740   37211 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:14.373765   37211 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:14.373922   37211 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:33:14.374121   37211 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:33:14.374249   37211 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:33:14.374417   37211 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:33:14.452487   37211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:14.465138   37211 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
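By this run the libvirt state for ha-782425-m02 is already "Stopped", so its remaining checks are skipped, while the surviving control planes are still probed through the shared endpoint: a GET to https://192.168.39.254:8443/healthz that returns 200 with body "ok" is treated as a running apiserver. A minimal sketch of that probe, assuming only the endpoint and response shown in the log (not minikube's own client; certificate verification is skipped here rather than loading the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy performs the same style of check the log records at
// api_server.go: GET <endpoint>/healthz and accept a 200 response whose
// body is exactly "ok".
func apiserverHealthy(endpoint string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The control plane serves a cluster-CA certificate, so this
			// sketch skips verification instead of trusting the CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok"
}

func main() {
	fmt.Println("apiserver healthy:", apiserverHealthy("https://192.168.39.254:8443"))
}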
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 7 (602.05516ms)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:33:23.562485   37318 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:33:23.562724   37318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:23.562732   37318 out.go:358] Setting ErrFile to fd 2...
	I0829 18:33:23.562736   37318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:23.562922   37318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:33:23.563069   37318 out.go:352] Setting JSON to false
	I0829 18:33:23.563096   37318 mustload.go:65] Loading cluster: ha-782425
	I0829 18:33:23.563194   37318 notify.go:220] Checking for updates...
	I0829 18:33:23.563465   37318 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:33:23.563478   37318 status.go:255] checking status of ha-782425 ...
	I0829 18:33:23.563836   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.563887   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.583687   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0829 18:33:23.584178   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.584767   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.584799   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.585191   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.585381   37318 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:33:23.587265   37318 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:33:23.587279   37318 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:23.587599   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.587646   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.602754   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0829 18:33:23.603111   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.603538   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.603560   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.603879   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.604050   37318 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:33:23.607099   37318 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:23.607486   37318 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:23.607512   37318 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:23.607621   37318 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:23.608005   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.608046   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.622715   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0829 18:33:23.623156   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.623602   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.623627   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.624209   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.624398   37318 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:33:23.624580   37318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:23.624604   37318 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:33:23.627289   37318 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:23.627725   37318 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:23.627753   37318 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:23.627908   37318 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:33:23.628082   37318 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:33:23.628262   37318 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:33:23.628393   37318 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:33:23.709432   37318 ssh_runner.go:195] Run: systemctl --version
	I0829 18:33:23.715420   37318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:23.730991   37318 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:23.731023   37318 api_server.go:166] Checking apiserver status ...
	I0829 18:33:23.731060   37318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:23.744931   37318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:33:23.754529   37318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:23.754620   37318 ssh_runner.go:195] Run: ls
	I0829 18:33:23.759416   37318 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:23.766320   37318 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:23.766342   37318 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:33:23.766351   37318 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:23.766370   37318 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:33:23.766686   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.766718   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.781788   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44909
	I0829 18:33:23.782251   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.782704   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.782721   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.782991   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.783158   37318 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:33:23.784610   37318 status.go:330] ha-782425-m02 host status = "Stopped" (err=<nil>)
	I0829 18:33:23.784626   37318 status.go:343] host is not running, skipping remaining checks
	I0829 18:33:23.784634   37318 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:23.784653   37318 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:33:23.785044   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.785091   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.799628   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I0829 18:33:23.800045   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.800493   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.800514   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.800787   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.801059   37318 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:33:23.802773   37318 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:33:23.802788   37318 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:23.803141   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.803180   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.817674   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I0829 18:33:23.818084   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.818571   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.818596   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.818887   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.819056   37318 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:33:23.821780   37318 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:23.822217   37318 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:23.822233   37318 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:23.822442   37318 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:23.822811   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.822847   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.838264   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
	I0829 18:33:23.838701   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.839201   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.839225   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.839509   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.839675   37318 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:33:23.839850   37318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:23.839896   37318 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:33:23.842451   37318 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:23.842868   37318 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:23.842891   37318 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:23.843025   37318 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:33:23.843185   37318 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:33:23.843329   37318 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:33:23.843468   37318 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:33:23.921353   37318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:23.936719   37318 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:23.936744   37318 api_server.go:166] Checking apiserver status ...
	I0829 18:33:23.936775   37318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:23.949384   37318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:33:23.959633   37318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:23.959684   37318 ssh_runner.go:195] Run: ls
	I0829 18:33:23.963989   37318 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:23.968184   37318 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:23.968205   37318 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:33:23.968213   37318 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:23.968227   37318 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:33:23.968540   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.968580   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:23.983616   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0829 18:33:23.984053   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:23.984541   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:23.984564   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:23.984859   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:23.985042   37318 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:33:23.986548   37318 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:33:23.986563   37318 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:23.986904   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:23.986943   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:24.001990   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
	I0829 18:33:24.002434   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:24.002882   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:24.002912   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:24.003268   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:24.003467   37318 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:33:24.006483   37318 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:24.007036   37318 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:24.007070   37318 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:24.007226   37318 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:24.007519   37318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:24.007560   37318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:24.023057   37318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0829 18:33:24.023441   37318 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:24.023897   37318 main.go:141] libmachine: Using API Version  1
	I0829 18:33:24.023924   37318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:24.024284   37318 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:24.024455   37318 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:33:24.024639   37318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:24.024660   37318 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:33:24.027422   37318 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:24.027809   37318 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:24.027834   37318 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:24.027966   37318 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:33:24.028076   37318 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:33:24.028213   37318 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:33:24.028318   37318 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:33:24.108833   37318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:24.123051   37318 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0829 18:33:26.706326   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
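The stderr block above traces how "minikube status" decides each control-plane node is healthy: it opens an SSH session to the node, runs "sudo systemctl is-active --quiet service kubelet", then probes the apiserver through the HA virtual IP at https://192.168.39.254:8443/healthz and treats a 200 "ok" response as Running (the failed freezer-cgroup lookup is only logged as a warning). The Go snippet below is a minimal sketch of that final healthz probe, not minikube's own code; the VIP and port are copied from the log above, and TLS verification is skipped only to keep the illustration self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// HA virtual IP and port as reported in the log above ("found \"ha-782425\" server").
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is signed by the cluster's own CA; verification
		// is skipped here purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}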
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 7 (598.381498ms)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-782425-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:33:30.056334   37407 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:33:30.056780   37407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:30.056798   37407 out.go:358] Setting ErrFile to fd 2...
	I0829 18:33:30.056806   37407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:30.057267   37407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:33:30.057707   37407 out.go:352] Setting JSON to false
	I0829 18:33:30.057743   37407 mustload.go:65] Loading cluster: ha-782425
	I0829 18:33:30.057842   37407 notify.go:220] Checking for updates...
	I0829 18:33:30.058242   37407 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:33:30.058263   37407 status.go:255] checking status of ha-782425 ...
	I0829 18:33:30.058718   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.058792   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.078477   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41775
	I0829 18:33:30.078906   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.079475   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.079494   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.079839   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.080000   37407 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:33:30.081662   37407 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:33:30.081677   37407 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:30.082035   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.082070   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.096536   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0829 18:33:30.096971   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.097412   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.097431   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.097819   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.098035   37407 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:33:30.100880   37407 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:30.101349   37407 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:30.101385   37407 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:30.101494   37407 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:33:30.101807   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.101850   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.116013   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36931
	I0829 18:33:30.116387   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.116852   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.116871   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.117168   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.117331   37407 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:33:30.117483   37407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:30.117515   37407 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:33:30.120291   37407 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:30.120672   37407 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:33:30.120695   37407 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:33:30.120837   37407 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:33:30.120992   37407 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:33:30.121152   37407 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:33:30.121307   37407 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:33:30.201418   37407 ssh_runner.go:195] Run: systemctl --version
	I0829 18:33:30.207824   37407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:30.223022   37407 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:30.223056   37407 api_server.go:166] Checking apiserver status ...
	I0829 18:33:30.223088   37407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:30.236432   37407 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0829 18:33:30.245312   37407 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:30.245379   37407 ssh_runner.go:195] Run: ls
	I0829 18:33:30.249487   37407 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:30.253618   37407 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:30.253639   37407 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:33:30.253648   37407 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:30.253665   37407 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:33:30.253956   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.253988   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.268939   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0829 18:33:30.269308   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.269753   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.269772   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.270034   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.270238   37407 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:33:30.271640   37407 status.go:330] ha-782425-m02 host status = "Stopped" (err=<nil>)
	I0829 18:33:30.271668   37407 status.go:343] host is not running, skipping remaining checks
	I0829 18:33:30.271677   37407 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:30.271696   37407 status.go:255] checking status of ha-782425-m03 ...
	I0829 18:33:30.271987   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.272018   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.286699   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I0829 18:33:30.287139   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.287628   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.287664   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.287946   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.288089   37407 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:33:30.289802   37407 status.go:330] ha-782425-m03 host status = "Running" (err=<nil>)
	I0829 18:33:30.289819   37407 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:30.290198   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.290230   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.305546   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0829 18:33:30.305932   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.306436   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.306464   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.306732   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.306922   37407 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:33:30.309651   37407 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:30.310039   37407 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:30.310057   37407 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:30.310248   37407 host.go:66] Checking if "ha-782425-m03" exists ...
	I0829 18:33:30.310682   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.310727   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.325309   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I0829 18:33:30.325723   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.326191   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.326210   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.326504   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.326668   37407 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:33:30.326848   37407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:30.326866   37407 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:33:30.329700   37407 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:30.330115   37407 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:30.330141   37407 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:30.330317   37407 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:33:30.330484   37407 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:33:30.330654   37407 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:33:30.330791   37407 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:33:30.410839   37407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:30.429664   37407 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:33:30.429694   37407 api_server.go:166] Checking apiserver status ...
	I0829 18:33:30.429741   37407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:33:30.445124   37407 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0829 18:33:30.455819   37407 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:33:30.455865   37407 ssh_runner.go:195] Run: ls
	I0829 18:33:30.460683   37407 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:33:30.464602   37407 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:33:30.464621   37407 status.go:422] ha-782425-m03 apiserver status = Running (err=<nil>)
	I0829 18:33:30.464629   37407 status.go:257] ha-782425-m03 status: &{Name:ha-782425-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:33:30.464644   37407 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:33:30.465007   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.465051   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.480510   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41757
	I0829 18:33:30.480944   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.481402   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.481423   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.481704   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.481878   37407 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:33:30.483601   37407 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:33:30.483628   37407 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:30.484030   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.484074   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.499564   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0829 18:33:30.499932   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.500398   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.500419   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.500706   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.500917   37407 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:33:30.503522   37407 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:30.503938   37407 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:30.503958   37407 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:30.504112   37407 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:33:30.504420   37407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:30.504455   37407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:30.519000   37407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0829 18:33:30.519341   37407 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:30.519773   37407 main.go:141] libmachine: Using API Version  1
	I0829 18:33:30.519794   37407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:30.520125   37407 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:30.520330   37407 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:33:30.520515   37407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:33:30.520535   37407 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:33:30.522802   37407 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:30.523272   37407 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:30.523303   37407 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:30.523434   37407 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:33:30.523593   37407 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:33:30.523737   37407 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:33:30.523876   37407 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:33:30.601771   37407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:33:30.615009   37407 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr" : exit status 7
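The non-zero exit above is driven by ha-782425-m02 still reporting Host/Kubelet/APIServer as Stopped after the earlier "node start m02" step, which is exactly what this RestartSecondaryNode test flags. As a rough illustration (field names are taken from the &{Name:... Host:...} dumps in the log, not from minikube's actual status.go), the per-node record and the "any stopped node means a non-zero exit" decision look roughly like this:

package main

import (
	"fmt"
	"os"
)

// nodeStatus mirrors the fields printed in the &{...} dumps above;
// it is a sketch for illustration, not minikube's real type.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Values transcribed from the status output shown above.
	nodes := []nodeStatus{
		{Name: "ha-782425", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
		{Name: "ha-782425-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "ha-782425-m03", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
		{Name: "ha-782425-m04", Host: "Running", Kubelet: "Running", APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true},
	}

	exitCode := 0
	for _, n := range nodes {
		fmt.Printf("%+v\n", n)
		if n.Host != "Running" {
			// The run above exited with status 7; here we simply exit non-zero
			// whenever any node reports a stopped host.
			exitCode = 7
		}
	}
	os.Exit(exitCode)
}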
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-782425 -n ha-782425
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-782425 logs -n 25: (1.325888123s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425:/home/docker/cp-test_ha-782425-m03_ha-782425.txt                       |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425 sudo cat                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425.txt                                 |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m04 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp testdata/cp-test.txt                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425:/home/docker/cp-test_ha-782425-m04_ha-782425.txt                       |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425 sudo cat                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425.txt                                 |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03:/home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m03 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-782425 node stop m02 -v=7                                                     | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-782425 node start m02 -v=7                                                    | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:25:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:25:37.867147   31894 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:25:37.867260   31894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:25:37.867269   31894 out.go:358] Setting ErrFile to fd 2...
	I0829 18:25:37.867277   31894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:25:37.867502   31894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:25:37.868071   31894 out.go:352] Setting JSON to false
	I0829 18:25:37.868905   31894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4085,"bootTime":1724951853,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:25:37.868962   31894 start.go:139] virtualization: kvm guest
	I0829 18:25:37.871126   31894 out.go:177] * [ha-782425] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:25:37.872509   31894 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:25:37.872500   31894 notify.go:220] Checking for updates...
	I0829 18:25:37.875147   31894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:25:37.876547   31894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:25:37.878107   31894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:25:37.879531   31894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:25:37.880985   31894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:25:37.882332   31894 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:25:37.917194   31894 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 18:25:37.918627   31894 start.go:297] selected driver: kvm2
	I0829 18:25:37.918643   31894 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:25:37.918658   31894 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:25:37.919635   31894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:25:37.919735   31894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:25:37.935215   31894 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:25:37.935265   31894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:25:37.935474   31894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:25:37.935545   31894 cni.go:84] Creating CNI manager for ""
	I0829 18:25:37.935558   31894 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0829 18:25:37.935569   31894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 18:25:37.935622   31894 start.go:340] cluster config:
	{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0829 18:25:37.935718   31894 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:25:37.937548   31894 out.go:177] * Starting "ha-782425" primary control-plane node in "ha-782425" cluster
	I0829 18:25:37.939035   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:25:37.939074   31894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:25:37.939081   31894 cache.go:56] Caching tarball of preloaded images
	I0829 18:25:37.939168   31894 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:25:37.939182   31894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:25:37.939477   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:25:37.939502   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json: {Name:mkade95470e4316599e5e198e15c0eefeb7e120b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:25:37.939656   31894 start.go:360] acquireMachinesLock for ha-782425: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:25:37.939691   31894 start.go:364] duration metric: took 19.785µs to acquireMachinesLock for "ha-782425"
	I0829 18:25:37.939714   31894 start.go:93] Provisioning new machine with config: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:25:37.939768   31894 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 18:25:37.941384   31894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 18:25:37.941518   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:25:37.941565   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:25:37.956286   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I0829 18:25:37.956726   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:25:37.957245   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:25:37.957269   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:25:37.957718   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:25:37.957980   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:25:37.958223   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:25:37.958368   31894 start.go:159] libmachine.API.Create for "ha-782425" (driver="kvm2")
	I0829 18:25:37.958398   31894 client.go:168] LocalClient.Create starting
	I0829 18:25:37.958429   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:25:37.958463   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:25:37.958479   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:25:37.958536   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:25:37.958557   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:25:37.958571   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:25:37.958586   31894 main.go:141] libmachine: Running pre-create checks...
	I0829 18:25:37.958598   31894 main.go:141] libmachine: (ha-782425) Calling .PreCreateCheck
	I0829 18:25:37.958967   31894 main.go:141] libmachine: (ha-782425) Calling .GetConfigRaw
	I0829 18:25:37.959311   31894 main.go:141] libmachine: Creating machine...
	I0829 18:25:37.959322   31894 main.go:141] libmachine: (ha-782425) Calling .Create
	I0829 18:25:37.959446   31894 main.go:141] libmachine: (ha-782425) Creating KVM machine...
	I0829 18:25:37.960839   31894 main.go:141] libmachine: (ha-782425) DBG | found existing default KVM network
	I0829 18:25:37.961520   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:37.961409   31917 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0829 18:25:37.961584   31894 main.go:141] libmachine: (ha-782425) DBG | created network xml: 
	I0829 18:25:37.961607   31894 main.go:141] libmachine: (ha-782425) DBG | <network>
	I0829 18:25:37.961636   31894 main.go:141] libmachine: (ha-782425) DBG |   <name>mk-ha-782425</name>
	I0829 18:25:37.961660   31894 main.go:141] libmachine: (ha-782425) DBG |   <dns enable='no'/>
	I0829 18:25:37.961680   31894 main.go:141] libmachine: (ha-782425) DBG |   
	I0829 18:25:37.961702   31894 main.go:141] libmachine: (ha-782425) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0829 18:25:37.961711   31894 main.go:141] libmachine: (ha-782425) DBG |     <dhcp>
	I0829 18:25:37.961719   31894 main.go:141] libmachine: (ha-782425) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0829 18:25:37.961731   31894 main.go:141] libmachine: (ha-782425) DBG |     </dhcp>
	I0829 18:25:37.961741   31894 main.go:141] libmachine: (ha-782425) DBG |   </ip>
	I0829 18:25:37.961746   31894 main.go:141] libmachine: (ha-782425) DBG |   
	I0829 18:25:37.961753   31894 main.go:141] libmachine: (ha-782425) DBG | </network>
	I0829 18:25:37.961771   31894 main.go:141] libmachine: (ha-782425) DBG | 
	I0829 18:25:37.967100   31894 main.go:141] libmachine: (ha-782425) DBG | trying to create private KVM network mk-ha-782425 192.168.39.0/24...
	I0829 18:25:38.030569   31894 main.go:141] libmachine: (ha-782425) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425 ...
	I0829 18:25:38.030600   31894 main.go:141] libmachine: (ha-782425) DBG | private KVM network mk-ha-782425 192.168.39.0/24 created
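
The step above corresponds to libvirt's define-and-start sequence for a network, using the XML printed a few lines earlier. A minimal sketch of that idea, assuming the libvirt.org/go/libvirt bindings (an assumption about tooling, not a quote of minikube's helper):

package sketch

import (
	libvirt "libvirt.org/go/libvirt"
)

// createPrivateNetwork defines the network from the XML printed above and
// brings it up, matching the "private KVM network mk-ha-782425 192.168.39.0/24
// created" message.
func createPrivateNetwork(conn *libvirt.Connect, networkXML string) error {
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		return err
	}
	defer network.Free()
	return network.Create()
}
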
	I0829 18:25:38.030613   31894 main.go:141] libmachine: (ha-782425) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:25:38.030663   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.030518   31917 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:25:38.030698   31894 main.go:141] libmachine: (ha-782425) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:25:38.292972   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.292825   31917 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa...
	I0829 18:25:38.429095   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.428945   31917 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/ha-782425.rawdisk...
	I0829 18:25:38.429117   31894 main.go:141] libmachine: (ha-782425) DBG | Writing magic tar header
	I0829 18:25:38.429154   31894 main.go:141] libmachine: (ha-782425) DBG | Writing SSH key tar header
	I0829 18:25:38.429201   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:38.429059   31917 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425 ...
	I0829 18:25:38.429233   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425
	I0829 18:25:38.429251   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:25:38.429261   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425 (perms=drwx------)
	I0829 18:25:38.429269   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:25:38.429276   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:25:38.429290   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:25:38.429301   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:25:38.429313   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:25:38.429324   31894 main.go:141] libmachine: (ha-782425) DBG | Checking permissions on dir: /home
	I0829 18:25:38.429333   31894 main.go:141] libmachine: (ha-782425) DBG | Skipping /home - not owner
	I0829 18:25:38.429342   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:25:38.429360   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:25:38.429382   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:25:38.429394   31894 main.go:141] libmachine: (ha-782425) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:25:38.429402   31894 main.go:141] libmachine: (ha-782425) Creating domain...
	I0829 18:25:38.430469   31894 main.go:141] libmachine: (ha-782425) define libvirt domain using xml: 
	I0829 18:25:38.430485   31894 main.go:141] libmachine: (ha-782425) <domain type='kvm'>
	I0829 18:25:38.430495   31894 main.go:141] libmachine: (ha-782425)   <name>ha-782425</name>
	I0829 18:25:38.430503   31894 main.go:141] libmachine: (ha-782425)   <memory unit='MiB'>2200</memory>
	I0829 18:25:38.430512   31894 main.go:141] libmachine: (ha-782425)   <vcpu>2</vcpu>
	I0829 18:25:38.430518   31894 main.go:141] libmachine: (ha-782425)   <features>
	I0829 18:25:38.430526   31894 main.go:141] libmachine: (ha-782425)     <acpi/>
	I0829 18:25:38.430534   31894 main.go:141] libmachine: (ha-782425)     <apic/>
	I0829 18:25:38.430543   31894 main.go:141] libmachine: (ha-782425)     <pae/>
	I0829 18:25:38.430563   31894 main.go:141] libmachine: (ha-782425)     
	I0829 18:25:38.430569   31894 main.go:141] libmachine: (ha-782425)   </features>
	I0829 18:25:38.430575   31894 main.go:141] libmachine: (ha-782425)   <cpu mode='host-passthrough'>
	I0829 18:25:38.430580   31894 main.go:141] libmachine: (ha-782425)   
	I0829 18:25:38.430584   31894 main.go:141] libmachine: (ha-782425)   </cpu>
	I0829 18:25:38.430589   31894 main.go:141] libmachine: (ha-782425)   <os>
	I0829 18:25:38.430593   31894 main.go:141] libmachine: (ha-782425)     <type>hvm</type>
	I0829 18:25:38.430607   31894 main.go:141] libmachine: (ha-782425)     <boot dev='cdrom'/>
	I0829 18:25:38.430611   31894 main.go:141] libmachine: (ha-782425)     <boot dev='hd'/>
	I0829 18:25:38.430618   31894 main.go:141] libmachine: (ha-782425)     <bootmenu enable='no'/>
	I0829 18:25:38.430629   31894 main.go:141] libmachine: (ha-782425)   </os>
	I0829 18:25:38.430636   31894 main.go:141] libmachine: (ha-782425)   <devices>
	I0829 18:25:38.430642   31894 main.go:141] libmachine: (ha-782425)     <disk type='file' device='cdrom'>
	I0829 18:25:38.430651   31894 main.go:141] libmachine: (ha-782425)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/boot2docker.iso'/>
	I0829 18:25:38.430662   31894 main.go:141] libmachine: (ha-782425)       <target dev='hdc' bus='scsi'/>
	I0829 18:25:38.430686   31894 main.go:141] libmachine: (ha-782425)       <readonly/>
	I0829 18:25:38.430705   31894 main.go:141] libmachine: (ha-782425)     </disk>
	I0829 18:25:38.430720   31894 main.go:141] libmachine: (ha-782425)     <disk type='file' device='disk'>
	I0829 18:25:38.430736   31894 main.go:141] libmachine: (ha-782425)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:25:38.430771   31894 main.go:141] libmachine: (ha-782425)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/ha-782425.rawdisk'/>
	I0829 18:25:38.430784   31894 main.go:141] libmachine: (ha-782425)       <target dev='hda' bus='virtio'/>
	I0829 18:25:38.430793   31894 main.go:141] libmachine: (ha-782425)     </disk>
	I0829 18:25:38.430804   31894 main.go:141] libmachine: (ha-782425)     <interface type='network'>
	I0829 18:25:38.430835   31894 main.go:141] libmachine: (ha-782425)       <source network='mk-ha-782425'/>
	I0829 18:25:38.430856   31894 main.go:141] libmachine: (ha-782425)       <model type='virtio'/>
	I0829 18:25:38.430870   31894 main.go:141] libmachine: (ha-782425)     </interface>
	I0829 18:25:38.430884   31894 main.go:141] libmachine: (ha-782425)     <interface type='network'>
	I0829 18:25:38.430903   31894 main.go:141] libmachine: (ha-782425)       <source network='default'/>
	I0829 18:25:38.430921   31894 main.go:141] libmachine: (ha-782425)       <model type='virtio'/>
	I0829 18:25:38.430934   31894 main.go:141] libmachine: (ha-782425)     </interface>
	I0829 18:25:38.430944   31894 main.go:141] libmachine: (ha-782425)     <serial type='pty'>
	I0829 18:25:38.430955   31894 main.go:141] libmachine: (ha-782425)       <target port='0'/>
	I0829 18:25:38.430965   31894 main.go:141] libmachine: (ha-782425)     </serial>
	I0829 18:25:38.430976   31894 main.go:141] libmachine: (ha-782425)     <console type='pty'>
	I0829 18:25:38.430985   31894 main.go:141] libmachine: (ha-782425)       <target type='serial' port='0'/>
	I0829 18:25:38.431010   31894 main.go:141] libmachine: (ha-782425)     </console>
	I0829 18:25:38.431027   31894 main.go:141] libmachine: (ha-782425)     <rng model='virtio'>
	I0829 18:25:38.431039   31894 main.go:141] libmachine: (ha-782425)       <backend model='random'>/dev/random</backend>
	I0829 18:25:38.431048   31894 main.go:141] libmachine: (ha-782425)     </rng>
	I0829 18:25:38.431058   31894 main.go:141] libmachine: (ha-782425)     
	I0829 18:25:38.431067   31894 main.go:141] libmachine: (ha-782425)     
	I0829 18:25:38.431077   31894 main.go:141] libmachine: (ha-782425)   </devices>
	I0829 18:25:38.431086   31894 main.go:141] libmachine: (ha-782425) </domain>
	I0829 18:25:38.431110   31894 main.go:141] libmachine: (ha-782425) 
	I0829 18:25:38.435249   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:47:48:9b in network default
	I0829 18:25:38.435805   31894 main.go:141] libmachine: (ha-782425) Ensuring networks are active...
	I0829 18:25:38.435822   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:38.436526   31894 main.go:141] libmachine: (ha-782425) Ensuring network default is active
	I0829 18:25:38.436895   31894 main.go:141] libmachine: (ha-782425) Ensuring network mk-ha-782425 is active
	I0829 18:25:38.437417   31894 main.go:141] libmachine: (ha-782425) Getting domain xml...
	I0829 18:25:38.438296   31894 main.go:141] libmachine: (ha-782425) Creating domain...
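
The two "Creating domain..." lines bracket libvirt's usual define-then-start flow for the domain XML shown above. A minimal sketch, again assuming the libvirt.org/go/libvirt bindings rather than quoting minikube's code:

package sketch

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStartDomain registers the domain XML printed above as a persistent
// definition and then boots it; qemu:///system matches KVMQemuURI in the profile.
func defineAndStartDomain(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return fmt.Errorf("define: %w", err)
	}
	defer dom.Free()

	return dom.Create() // second "Creating domain..." actually starts the VM
}
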
	I0829 18:25:39.612755   31894 main.go:141] libmachine: (ha-782425) Waiting to get IP...
	I0829 18:25:39.613519   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:39.613932   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:39.613969   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:39.613908   31917 retry.go:31] will retry after 252.54956ms: waiting for machine to come up
	I0829 18:25:39.868393   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:39.868798   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:39.868825   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:39.868768   31917 retry.go:31] will retry after 318.299028ms: waiting for machine to come up
	I0829 18:25:40.188369   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:40.188837   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:40.188860   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:40.188786   31917 retry.go:31] will retry after 363.788273ms: waiting for machine to come up
	I0829 18:25:40.554528   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:40.554973   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:40.555001   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:40.554931   31917 retry.go:31] will retry after 455.656451ms: waiting for machine to come up
	I0829 18:25:41.012838   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:41.013254   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:41.013285   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:41.013209   31917 retry.go:31] will retry after 583.854313ms: waiting for machine to come up
	I0829 18:25:41.600776   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:41.601286   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:41.601323   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:41.601203   31917 retry.go:31] will retry after 720.267915ms: waiting for machine to come up
	I0829 18:25:42.323178   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:42.323693   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:42.323734   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:42.323624   31917 retry.go:31] will retry after 989.211909ms: waiting for machine to come up
	I0829 18:25:43.314724   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:43.315093   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:43.315119   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:43.315058   31917 retry.go:31] will retry after 1.144448467s: waiting for machine to come up
	I0829 18:25:44.461273   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:44.461690   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:44.461709   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:44.461657   31917 retry.go:31] will retry after 1.158642835s: waiting for machine to come up
	I0829 18:25:45.621905   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:45.622358   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:45.622391   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:45.622320   31917 retry.go:31] will retry after 1.998708112s: waiting for machine to come up
	I0829 18:25:47.622185   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:47.622780   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:47.622811   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:47.622722   31917 retry.go:31] will retry after 2.004091072s: waiting for machine to come up
	I0829 18:25:49.628964   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:49.629575   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:49.629605   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:49.629546   31917 retry.go:31] will retry after 2.529906337s: waiting for machine to come up
	I0829 18:25:52.160611   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:52.160895   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:52.160912   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:52.160852   31917 retry.go:31] will retry after 3.940258303s: waiting for machine to come up
	I0829 18:25:56.104431   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:25:56.104936   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find current IP address of domain ha-782425 in network mk-ha-782425
	I0829 18:25:56.104960   31894 main.go:141] libmachine: (ha-782425) DBG | I0829 18:25:56.104888   31917 retry.go:31] will retry after 4.177118538s: waiting for machine to come up
	I0829 18:26:00.285123   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.285741   31894 main.go:141] libmachine: (ha-782425) Found IP for machine: 192.168.39.39
	I0829 18:26:00.285766   31894 main.go:141] libmachine: (ha-782425) Reserving static IP address...
	I0829 18:26:00.285780   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has current primary IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.286236   31894 main.go:141] libmachine: (ha-782425) DBG | unable to find host DHCP lease matching {name: "ha-782425", mac: "52:54:00:4e:37:dc", ip: "192.168.39.39"} in network mk-ha-782425
	I0829 18:26:00.355403   31894 main.go:141] libmachine: (ha-782425) DBG | Getting to WaitForSSH function...
	I0829 18:26:00.355449   31894 main.go:141] libmachine: (ha-782425) Reserved static IP address: 192.168.39.39
	I0829 18:26:00.355463   31894 main.go:141] libmachine: (ha-782425) Waiting for SSH to be available...
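
The "Waiting to get IP..." loop above polls the network's DHCP leases with growing delays until the domain's MAC address shows up. A sketch of that idea; the function name, backoff values, and libvirt.org/go/libvirt usage are illustrative assumptions:

package sketch

import (
	"fmt"
	"strings"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForIP looks up the lease for the given MAC (52:54:00:4e:37:dc above) in
// the named network (mk-ha-782425) until it appears or the deadline passes.
func waitForIP(conn *libvirt.Connect, networkName, mac string, timeout time.Duration) (string, error) {
	network, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer network.Free()

	deadline := time.Now().Add(timeout)
	for backoff := 250 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, lease := range leases {
			if strings.EqualFold(lease.Mac, mac) {
				return lease.IPaddr, nil // 192.168.39.39 in the run above
			}
		}
		time.Sleep(backoff) // the real retry.go uses jittered, roughly growing delays
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, networkName)
}
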
	I0829 18:26:00.357630   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.358018   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.358048   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.358192   31894 main.go:141] libmachine: (ha-782425) DBG | Using SSH client type: external
	I0829 18:26:00.358218   31894 main.go:141] libmachine: (ha-782425) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa (-rw-------)
	I0829 18:26:00.358247   31894 main.go:141] libmachine: (ha-782425) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:26:00.358255   31894 main.go:141] libmachine: (ha-782425) DBG | About to run SSH command:
	I0829 18:26:00.358268   31894 main.go:141] libmachine: (ha-782425) DBG | exit 0
	I0829 18:26:00.482401   31894 main.go:141] libmachine: (ha-782425) DBG | SSH cmd err, output: <nil>: 
	I0829 18:26:00.482690   31894 main.go:141] libmachine: (ha-782425) KVM machine creation complete!
	I0829 18:26:00.482969   31894 main.go:141] libmachine: (ha-782425) Calling .GetConfigRaw
	I0829 18:26:00.483536   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:00.483778   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:00.483936   31894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:26:00.483954   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:00.485260   31894 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:26:00.485278   31894 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:26:00.485285   31894 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:26:00.485291   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.488046   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.488395   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.488429   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.488606   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.488780   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.488949   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.489085   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.489274   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.489560   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.489578   31894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:26:00.597339   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
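
The native client's "exit 0" probe above can be reproduced with golang.org/x/crypto/ssh. The sketch below mirrors the logged options (user docker, key auth, no host-key checking) but is an illustration, not the code that produced these lines:

package sketch

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials addr (e.g. "192.168.39.39:22") with the machine's private key
// and runs "exit 0"; a nil error means SSH is available.
func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath) // .../machines/ha-782425/id_rsa
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user, // "docker"
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}
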
	I0829 18:26:00.597364   31894 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:26:00.597377   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.599767   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.600124   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.600160   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.600321   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.600521   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.600663   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.600777   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.600956   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.601126   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.601136   31894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:26:00.710649   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:26:00.710712   31894 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:26:00.710721   31894 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:26:00.710728   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:26:00.710947   31894 buildroot.go:166] provisioning hostname "ha-782425"
	I0829 18:26:00.710971   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:26:00.711148   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.713696   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.714073   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.714112   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.714296   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.714511   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.714635   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.714753   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.714909   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.715079   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.715092   31894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425 && echo "ha-782425" | sudo tee /etc/hostname
	I0829 18:26:00.836970   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425
	
	I0829 18:26:00.836997   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.839997   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.840367   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.840400   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.840531   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:00.840729   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.840872   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:00.841037   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:00.841202   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:00.841416   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:00.841439   31894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:26:00.958497   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:26:00.958521   31894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:26:00.958538   31894 buildroot.go:174] setting up certificates
	I0829 18:26:00.958547   31894 provision.go:84] configureAuth start
	I0829 18:26:00.958555   31894 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:26:00.958866   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:00.961597   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.961805   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.961838   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.961942   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:00.963894   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.964151   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:00.964173   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:00.964277   31894 provision.go:143] copyHostCerts
	I0829 18:26:00.964308   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:00.964351   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:26:00.964366   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:00.964470   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:26:00.964554   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:00.964575   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:26:00.964582   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:00.964616   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:26:00.964664   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:00.964680   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:26:00.964686   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:00.964708   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:26:00.964750   31894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425 san=[127.0.0.1 192.168.39.39 ha-782425 localhost minikube]
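
The server certificate generated above is signed by the local CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.39, ha-782425, localhost, minikube). A standard-library sketch of such an issuance; the helper and file name are illustrative, not minikube's implementation:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert writes server.pem signed by the given CA, with the SAN set
// matching the "generating server cert" line above.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-782425"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		DNSNames:     []string{"ha-782425", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	return os.WriteFile("server.pem", certPEM, 0o644)
}
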
	I0829 18:26:01.079246   31894 provision.go:177] copyRemoteCerts
	I0829 18:26:01.079300   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:26:01.079331   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.081792   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.082106   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.082137   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.082301   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.082509   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.082691   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.082835   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.167913   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:26:01.167997   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:26:01.191043   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:26:01.191129   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0829 18:26:01.212920   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:26:01.212985   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:26:01.234244   31894 provision.go:87] duration metric: took 275.684593ms to configureAuth
	I0829 18:26:01.234275   31894 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:26:01.234479   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:01.234567   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.237125   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.237462   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.237489   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.237630   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.237817   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.237969   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.238110   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.238249   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:01.238407   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:01.238428   31894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:26:01.455620   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:26:01.455656   31894 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:26:01.455669   31894 main.go:141] libmachine: (ha-782425) Calling .GetURL
	I0829 18:26:01.456811   31894 main.go:141] libmachine: (ha-782425) DBG | Using libvirt version 6000000
	I0829 18:26:01.458787   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.459127   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.459168   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.459267   31894 main.go:141] libmachine: Docker is up and running!
	I0829 18:26:01.459279   31894 main.go:141] libmachine: Reticulating splines...
	I0829 18:26:01.459287   31894 client.go:171] duration metric: took 23.500881314s to LocalClient.Create
	I0829 18:26:01.459310   31894 start.go:167] duration metric: took 23.500942151s to libmachine.API.Create "ha-782425"
	I0829 18:26:01.459322   31894 start.go:293] postStartSetup for "ha-782425" (driver="kvm2")
	I0829 18:26:01.459334   31894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:26:01.459367   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.459573   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:26:01.459592   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.461877   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.462212   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.462240   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.462383   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.462557   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.462739   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.462879   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.544073   31894 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:26:01.548167   31894 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:26:01.548200   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:26:01.548274   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:26:01.548369   31894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:26:01.548381   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:26:01.548478   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:26:01.557256   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:01.580305   31894 start.go:296] duration metric: took 120.971682ms for postStartSetup
	I0829 18:26:01.580348   31894 main.go:141] libmachine: (ha-782425) Calling .GetConfigRaw
	I0829 18:26:01.581010   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:01.583449   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.583718   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.583746   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.583986   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:01.584164   31894 start.go:128] duration metric: took 23.644387848s to createHost
	I0829 18:26:01.584186   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.586374   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.586698   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.586716   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.586871   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.587039   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.587184   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.587318   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.587436   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:01.587606   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:26:01.587633   31894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:26:01.694987   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724955961.672763996
	
	I0829 18:26:01.695008   31894 fix.go:216] guest clock: 1724955961.672763996
	I0829 18:26:01.695015   31894 fix.go:229] Guest: 2024-08-29 18:26:01.672763996 +0000 UTC Remote: 2024-08-29 18:26:01.584176103 +0000 UTC m=+23.752171628 (delta=88.587893ms)
	I0829 18:26:01.695034   31894 fix.go:200] guest clock delta is within tolerance: 88.587893ms
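
The guest-clock check above parses the "date +%s.%N" output, compares it with the host's wall clock, and accepts the result when the delta (88.587893ms here) stays inside a tolerance. A small sketch of that arithmetic; the tolerance handling is an assumption, not minikube's exact rule:

package sketch

import (
	"strconv"
	"strings"
	"time"
)

// clockDelta turns "1724955961.672763996" (seconds.nanoseconds since the epoch)
// into a time and returns guest minus local.
func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad or truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

// withinTolerance reports whether the absolute delta is acceptable.
func withinTolerance(delta, tolerance time.Duration) bool {
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}
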
	I0829 18:26:01.695040   31894 start.go:83] releasing machines lock for "ha-782425", held for 23.755337443s
	I0829 18:26:01.695060   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.695287   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:01.697859   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.698352   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.698387   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.698459   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.698952   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.699131   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:01.699237   31894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:26:01.699273   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.699355   31894 ssh_runner.go:195] Run: cat /version.json
	I0829 18:26:01.699380   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:01.702040   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.702401   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.702441   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.702462   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.702696   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.702899   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.702950   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:01.702975   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:01.703075   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.703102   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:01.703245   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.703261   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:01.703470   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:01.703601   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:01.782897   31894 ssh_runner.go:195] Run: systemctl --version
	I0829 18:26:01.815514   31894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:26:01.970702   31894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:26:01.976178   31894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:26:01.976233   31894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:26:01.992238   31894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:26:01.992258   31894 start.go:495] detecting cgroup driver to use...
	I0829 18:26:01.992312   31894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:26:02.008342   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:26:02.021835   31894 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:26:02.021905   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:26:02.035185   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:26:02.048429   31894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:26:02.156392   31894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:26:02.294402   31894 docker.go:233] disabling docker service ...
	I0829 18:26:02.294462   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:26:02.308389   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:26:02.320832   31894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:26:02.459717   31894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:26:02.580176   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:26:02.595527   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:26:02.613403   31894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:26:02.613464   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.623157   31894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:26:02.623243   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.632952   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.642439   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.652287   31894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:26:02.662209   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.672069   31894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:02.688368   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
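
After the sed edits above, the /etc/crio/crio.conf.d/02-crio.conf drop-in should end up with settings along these lines. This is reconstructed from the commands and CRI-O's TOML layout, not read back from the VM:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
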
	I0829 18:26:02.698250   31894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:26:02.707460   31894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:26:02.707504   31894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:26:02.720479   31894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:26:02.729874   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:02.852411   31894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:26:02.938754   31894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:26:02.938815   31894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:26:02.943380   31894 start.go:563] Will wait 60s for crictl version
	I0829 18:26:02.943425   31894 ssh_runner.go:195] Run: which crictl
	I0829 18:26:02.946880   31894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:26:02.984261   31894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:26:02.984338   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:03.010616   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:03.039162   31894 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:26:03.040233   31894 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:26:03.043179   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:03.043479   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:03.043495   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:03.043704   31894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:26:03.047399   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:03.059113   31894 kubeadm.go:883] updating cluster {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:26:03.059203   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:26:03.059244   31894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:26:03.087934   31894 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 18:26:03.087991   31894 ssh_runner.go:195] Run: which lz4
	I0829 18:26:03.091491   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0829 18:26:03.091573   31894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 18:26:03.095120   31894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 18:26:03.095146   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 18:26:04.293568   31894 crio.go:462] duration metric: took 1.202015488s to copy over tarball
	I0829 18:26:04.293653   31894 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 18:26:06.284728   31894 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.991042195s)
	I0829 18:26:06.284762   31894 crio.go:469] duration metric: took 1.991160188s to extract the tarball
	I0829 18:26:06.284772   31894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 18:26:06.320353   31894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:26:06.363216   31894 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:26:06.363244   31894 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:26:06.363255   31894 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.31.0 crio true true} ...
	I0829 18:26:06.363371   31894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:26:06.363438   31894 ssh_runner.go:195] Run: crio config
	I0829 18:26:06.406168   31894 cni.go:84] Creating CNI manager for ""
	I0829 18:26:06.406186   31894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 18:26:06.406198   31894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:26:06.406219   31894 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-782425 NodeName:ha-782425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:26:06.406378   31894 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-782425"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:26:06.406401   31894 kube-vip.go:115] generating kube-vip config ...
	I0829 18:26:06.406463   31894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:26:06.424445   31894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:26:06.424554   31894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0829 18:26:06.424617   31894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:26:06.434031   31894 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:26:06.434123   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 18:26:06.442976   31894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 18:26:06.458034   31894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:26:06.473075   31894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 18:26:06.488549   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0829 18:26:06.503336   31894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:26:06.506900   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:06.517900   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:06.640996   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:26:06.657546   31894 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.39
	I0829 18:26:06.657574   31894 certs.go:194] generating shared ca certs ...
	I0829 18:26:06.657594   31894 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:06.657779   31894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:26:06.657829   31894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:26:06.657843   31894 certs.go:256] generating profile certs ...
	I0829 18:26:06.657908   31894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:26:06.657926   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt with IP's: []
	I0829 18:26:06.833897   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt ...
	I0829 18:26:06.833920   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt: {Name:mk803862989d3014c3f0f9b504b3f02d49baada0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:06.834075   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key ...
	I0829 18:26:06.834084   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key: {Name:mk7300df711cd15668d6488958571b6b4b07bc70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:06.834174   31894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7
	I0829 18:26:06.834189   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.254]
	I0829 18:26:07.101989   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7 ...
	I0829 18:26:07.102023   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7: {Name:mk00951deaf96cd75f54dbd1e69bfc47cc7fc9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.102207   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7 ...
	I0829 18:26:07.102224   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7: {Name:mk268bf097f2f487c3ef925c05ee57a582c2559a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.102294   31894 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.ed268bd7 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:26:07.102389   31894 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.ed268bd7 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
	I0829 18:26:07.102443   31894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:26:07.102461   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt with IP's: []
	I0829 18:26:07.181496   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt ...
	I0829 18:26:07.181527   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt: {Name:mk24182090946f9eb12d50db2a2a78f43a4dcb2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.181673   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key ...
	I0829 18:26:07.181691   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key: {Name:mk68143175544f4e4e481f32b6e72cda322b8ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:07.181760   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:26:07.181776   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:26:07.181787   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:26:07.181798   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:26:07.181808   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:26:07.181818   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:26:07.181828   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:26:07.181840   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:26:07.181906   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:26:07.181940   31894 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:26:07.181949   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:26:07.181971   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:26:07.182008   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:26:07.182034   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:26:07.182080   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:07.182135   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.182155   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.182168   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.182765   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:26:07.207474   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:26:07.230113   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:26:07.252672   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:26:07.275435   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 18:26:07.297843   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:26:07.321190   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:26:07.344146   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:26:07.366687   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:26:07.388171   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:26:07.412637   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:26:07.445470   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:26:07.463183   31894 ssh_runner.go:195] Run: openssl version
	I0829 18:26:07.469735   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:26:07.480017   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.484182   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.484241   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:26:07.489548   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:26:07.499332   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:26:07.508783   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.512801   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.512857   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:07.517956   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:26:07.527522   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:26:07.537444   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.541397   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.541458   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:26:07.546721   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:26:07.556751   31894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:26:07.560526   31894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:26:07.560589   31894 kubeadm.go:392] StartCluster: {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:26:07.560682   31894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:26:07.560723   31894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:26:07.597019   31894 cri.go:89] found id: ""
	I0829 18:26:07.597103   31894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:26:07.606350   31894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:26:07.614722   31894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:26:07.622807   31894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:26:07.622826   31894 kubeadm.go:157] found existing configuration files:
	
	I0829 18:26:07.622875   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:26:07.630502   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:26:07.630544   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:26:07.638605   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:26:07.646170   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:26:07.646238   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:26:07.654851   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:26:07.662865   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:26:07.662908   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:26:07.671205   31894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:26:07.678975   31894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:26:07.679023   31894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:26:07.687174   31894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 18:26:07.783868   31894 kubeadm.go:310] W0829 18:26:07.767739     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:26:07.784465   31894 kubeadm.go:310] W0829 18:26:07.768529     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:26:07.878060   31894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:26:22.425502   31894 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:26:22.425613   31894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:26:22.425713   31894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:26:22.425846   31894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:26:22.425968   31894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:26:22.426044   31894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:26:22.427617   31894 out.go:235]   - Generating certificates and keys ...
	I0829 18:26:22.427712   31894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:26:22.427808   31894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:26:22.427918   31894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:26:22.427987   31894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:26:22.428070   31894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:26:22.428141   31894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:26:22.428218   31894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:26:22.428391   31894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-782425 localhost] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0829 18:26:22.428472   31894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:26:22.428606   31894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-782425 localhost] and IPs [192.168.39.39 127.0.0.1 ::1]
	I0829 18:26:22.428714   31894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:26:22.428813   31894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:26:22.428877   31894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:26:22.428959   31894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:26:22.429032   31894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:26:22.429113   31894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:26:22.429194   31894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:26:22.429280   31894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:26:22.429331   31894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:26:22.429411   31894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:26:22.429473   31894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:26:22.432035   31894 out.go:235]   - Booting up control plane ...
	I0829 18:26:22.432159   31894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:26:22.432261   31894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:26:22.432370   31894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:26:22.432499   31894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:26:22.432608   31894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:26:22.432652   31894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:26:22.432768   31894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:26:22.432865   31894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:26:22.432920   31894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001203498s
	I0829 18:26:22.432975   31894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:26:22.433020   31894 kubeadm.go:310] [api-check] The API server is healthy after 8.980651426s
	I0829 18:26:22.433105   31894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:26:22.433216   31894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:26:22.433291   31894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:26:22.433463   31894 kubeadm.go:310] [mark-control-plane] Marking the node ha-782425 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:26:22.433524   31894 kubeadm.go:310] [bootstrap-token] Using token: hmug4n.uc0tr7mprzanzx0o
	I0829 18:26:22.434804   31894 out.go:235]   - Configuring RBAC rules ...
	I0829 18:26:22.434891   31894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:26:22.434959   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:26:22.435087   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:26:22.435209   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:26:22.435319   31894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:26:22.435429   31894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:26:22.435527   31894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:26:22.435600   31894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:26:22.435671   31894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:26:22.435680   31894 kubeadm.go:310] 
	I0829 18:26:22.435763   31894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:26:22.435771   31894 kubeadm.go:310] 
	I0829 18:26:22.435847   31894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:26:22.435853   31894 kubeadm.go:310] 
	I0829 18:26:22.435874   31894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:26:22.435927   31894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:26:22.435978   31894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:26:22.435985   31894 kubeadm.go:310] 
	I0829 18:26:22.436043   31894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:26:22.436054   31894 kubeadm.go:310] 
	I0829 18:26:22.436089   31894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:26:22.436095   31894 kubeadm.go:310] 
	I0829 18:26:22.436134   31894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:26:22.436226   31894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:26:22.436328   31894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:26:22.436337   31894 kubeadm.go:310] 
	I0829 18:26:22.436444   31894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:26:22.436543   31894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:26:22.436555   31894 kubeadm.go:310] 
	I0829 18:26:22.436656   31894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hmug4n.uc0tr7mprzanzx0o \
	I0829 18:26:22.436752   31894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 18:26:22.436774   31894 kubeadm.go:310] 	--control-plane 
	I0829 18:26:22.436778   31894 kubeadm.go:310] 
	I0829 18:26:22.436845   31894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:26:22.436852   31894 kubeadm.go:310] 
	I0829 18:26:22.436922   31894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hmug4n.uc0tr7mprzanzx0o \
	I0829 18:26:22.437070   31894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
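For context, the --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of that computation follows; the input path is illustrative, and this is not minikube's or kubeadm's code.

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Illustrative path; on the node the cluster CA is kept under /var/lib/minikube/certs/ca.crt.
		pemBytes, err := os.ReadFile("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}

Run against the same CA, this prints the same "sha256:bea94402..." value that appears in the logged join commands.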
	I0829 18:26:22.437086   31894 cni.go:84] Creating CNI manager for ""
	I0829 18:26:22.437093   31894 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 18:26:22.439028   31894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0829 18:26:22.440208   31894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0829 18:26:22.445542   31894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0829 18:26:22.445562   31894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0829 18:26:22.463693   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0829 18:26:22.849317   31894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:26:22.849415   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:26:22.849433   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-782425 minikube.k8s.io/updated_at=2024_08_29T18_26_22_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=ha-782425 minikube.k8s.io/primary=true
	I0829 18:26:22.895718   31894 ops.go:34] apiserver oom_adj: -16
	I0829 18:26:23.041273   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:26:23.541812   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:26:23.620308   31894 kubeadm.go:1113] duration metric: took 770.957594ms to wait for elevateKubeSystemPrivileges
	I0829 18:26:23.620352   31894 kubeadm.go:394] duration metric: took 16.059767851s to StartCluster
	I0829 18:26:23.620375   31894 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:23.620445   31894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:26:23.621113   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:23.621311   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:26:23.621318   31894 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:26:23.621334   31894 start.go:241] waiting for startup goroutines ...
	I0829 18:26:23.621341   31894 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 18:26:23.621382   31894 addons.go:69] Setting storage-provisioner=true in profile "ha-782425"
	I0829 18:26:23.621395   31894 addons.go:69] Setting default-storageclass=true in profile "ha-782425"
	I0829 18:26:23.621407   31894 addons.go:234] Setting addon storage-provisioner=true in "ha-782425"
	I0829 18:26:23.621427   31894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-782425"
	I0829 18:26:23.621430   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:23.621518   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:23.621786   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.621817   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.621823   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.621850   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.636750   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0829 18:26:23.637235   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.637848   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.637883   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.637894   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0829 18:26:23.638249   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.638298   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.638700   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.638723   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.638781   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.638805   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.639198   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.639403   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:23.641586   31894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:26:23.641814   31894 kapi.go:59] client config for ha-782425: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt", KeyFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key", CAFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
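The rest.Config dump above shows the client credentials minikube uses against the HA VIP (https://192.168.39.254:8443). A minimal client-go sketch that builds an equivalent client from such certificate files is shown below; paths are shortened for readability and the example is illustrative, not minikube's code.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Values taken from the logged rest.Config; file paths shortened for readability.
		cfg := &rest.Config{
			Host: "https://192.168.39.254:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "profiles/ha-782425/client.crt",
				KeyFile:  "profiles/ha-782425/client.key",
				CAFile:   "ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same kind of request as the storageclasses round-trips logged further below.
		scs, err := clientset.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("storage classes:", len(scs.Items))
	}

The GET/PUT requests to /apis/storage.k8s.io/v1/storageclasses recorded a few lines later are the addon manager issuing exactly this sort of call through the configured client.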
	I0829 18:26:23.642275   31894 cert_rotation.go:140] Starting client certificate rotation controller
	I0829 18:26:23.642432   31894 addons.go:234] Setting addon default-storageclass=true in "ha-782425"
	I0829 18:26:23.642470   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:23.642730   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.642757   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.654166   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0829 18:26:23.654595   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.655144   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.655166   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.655538   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.655731   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:23.657530   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:23.657995   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I0829 18:26:23.658434   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.658926   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.658941   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.659283   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.659764   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:23.659817   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:23.659878   31894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:26:23.661072   31894 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:26:23.661086   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:26:23.661098   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:23.664372   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.664817   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:23.664883   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.665064   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:23.665249   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:23.665397   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:23.665524   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:23.675129   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I0829 18:26:23.675526   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:23.675939   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:23.675958   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:23.676286   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:23.676486   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:23.678105   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:23.678309   31894 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:26:23.678328   31894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:26:23.678347   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:23.681147   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.681657   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:23.681687   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:23.681817   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:23.682006   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:23.682189   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:23.682327   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:23.781785   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:26:23.867153   31894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:26:23.874728   31894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:26:24.296714   31894 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0829 18:26:24.534402   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534430   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.534501   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534533   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.534760   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.534774   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.534782   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534788   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.534899   31894 main.go:141] libmachine: (ha-782425) DBG | Closing plugin on server side
	I0829 18:26:24.534903   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.534918   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.534949   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.534961   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.535046   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.535058   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.536174   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.536185   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.536252   31894 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 18:26:24.536266   31894 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 18:26:24.536352   31894 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0829 18:26:24.536361   31894 round_trippers.go:469] Request Headers:
	I0829 18:26:24.536371   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:26:24.536375   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:26:24.550657   31894 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0829 18:26:24.551482   31894 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0829 18:26:24.551502   31894 round_trippers.go:469] Request Headers:
	I0829 18:26:24.551519   31894 round_trippers.go:473]     Content-Type: application/json
	I0829 18:26:24.551530   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:26:24.551536   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:26:24.554968   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:26:24.555329   31894 main.go:141] libmachine: Making call to close driver server
	I0829 18:26:24.555349   31894 main.go:141] libmachine: (ha-782425) Calling .Close
	I0829 18:26:24.555615   31894 main.go:141] libmachine: Successfully made call to close driver server
	I0829 18:26:24.555646   31894 main.go:141] libmachine: (ha-782425) DBG | Closing plugin on server side
	I0829 18:26:24.555663   31894 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 18:26:24.557267   31894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0829 18:26:24.558409   31894 addons.go:510] duration metric: took 937.060796ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0829 18:26:24.558452   31894 start.go:246] waiting for cluster config update ...
	I0829 18:26:24.558467   31894 start.go:255] writing updated cluster config ...
	I0829 18:26:24.559795   31894 out.go:201] 
	I0829 18:26:24.560958   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:24.561021   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:24.562366   31894 out.go:177] * Starting "ha-782425-m02" control-plane node in "ha-782425" cluster
	I0829 18:26:24.563288   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:26:24.563317   31894 cache.go:56] Caching tarball of preloaded images
	I0829 18:26:24.563443   31894 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:26:24.563460   31894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:26:24.563556   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:24.563792   31894 start.go:360] acquireMachinesLock for ha-782425-m02: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:26:24.563849   31894 start.go:364] duration metric: took 30.889µs to acquireMachinesLock for "ha-782425-m02"
	I0829 18:26:24.563873   31894 start.go:93] Provisioning new machine with config: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:26:24.563984   31894 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0829 18:26:24.565373   31894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 18:26:24.565468   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:24.565499   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:24.579868   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36615
	I0829 18:26:24.580329   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:24.580779   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:24.580794   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:24.581121   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:24.581320   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:24.581467   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:24.581662   31894 start.go:159] libmachine.API.Create for "ha-782425" (driver="kvm2")
	I0829 18:26:24.581687   31894 client.go:168] LocalClient.Create starting
	I0829 18:26:24.581726   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:26:24.581767   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:26:24.581790   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:26:24.581870   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:26:24.581897   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:26:24.581917   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:26:24.581938   31894 main.go:141] libmachine: Running pre-create checks...
	I0829 18:26:24.581950   31894 main.go:141] libmachine: (ha-782425-m02) Calling .PreCreateCheck
	I0829 18:26:24.582114   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetConfigRaw
	I0829 18:26:24.582554   31894 main.go:141] libmachine: Creating machine...
	I0829 18:26:24.582572   31894 main.go:141] libmachine: (ha-782425-m02) Calling .Create
	I0829 18:26:24.582686   31894 main.go:141] libmachine: (ha-782425-m02) Creating KVM machine...
	I0829 18:26:24.583646   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found existing default KVM network
	I0829 18:26:24.583738   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found existing private KVM network mk-ha-782425
	I0829 18:26:24.583867   31894 main.go:141] libmachine: (ha-782425-m02) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02 ...
	I0829 18:26:24.583895   31894 main.go:141] libmachine: (ha-782425-m02) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:26:24.583942   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:24.583849   32252 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:26:24.584044   31894 main.go:141] libmachine: (ha-782425-m02) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:26:24.812205   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:24.812048   32252 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa...
	I0829 18:26:25.012329   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:25.012158   32252 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/ha-782425-m02.rawdisk...
	I0829 18:26:25.012369   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Writing magic tar header
	I0829 18:26:25.012391   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Writing SSH key tar header
	I0829 18:26:25.012404   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:25.012268   32252 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02 ...
	I0829 18:26:25.012417   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02 (perms=drwx------)
	I0829 18:26:25.012433   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:26:25.012444   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:26:25.012458   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:26:25.012479   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:26:25.012497   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02
	I0829 18:26:25.012509   31894 main.go:141] libmachine: (ha-782425-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:26:25.012529   31894 main.go:141] libmachine: (ha-782425-m02) Creating domain...
	I0829 18:26:25.012556   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:26:25.012571   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:26:25.012601   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:26:25.012625   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:26:25.012636   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:26:25.012646   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Checking permissions on dir: /home
	I0829 18:26:25.012658   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Skipping /home - not owner
	I0829 18:26:25.013572   31894 main.go:141] libmachine: (ha-782425-m02) define libvirt domain using xml: 
	I0829 18:26:25.013596   31894 main.go:141] libmachine: (ha-782425-m02) <domain type='kvm'>
	I0829 18:26:25.013608   31894 main.go:141] libmachine: (ha-782425-m02)   <name>ha-782425-m02</name>
	I0829 18:26:25.013616   31894 main.go:141] libmachine: (ha-782425-m02)   <memory unit='MiB'>2200</memory>
	I0829 18:26:25.013645   31894 main.go:141] libmachine: (ha-782425-m02)   <vcpu>2</vcpu>
	I0829 18:26:25.013657   31894 main.go:141] libmachine: (ha-782425-m02)   <features>
	I0829 18:26:25.013666   31894 main.go:141] libmachine: (ha-782425-m02)     <acpi/>
	I0829 18:26:25.013677   31894 main.go:141] libmachine: (ha-782425-m02)     <apic/>
	I0829 18:26:25.013688   31894 main.go:141] libmachine: (ha-782425-m02)     <pae/>
	I0829 18:26:25.013699   31894 main.go:141] libmachine: (ha-782425-m02)     
	I0829 18:26:25.013720   31894 main.go:141] libmachine: (ha-782425-m02)   </features>
	I0829 18:26:25.013735   31894 main.go:141] libmachine: (ha-782425-m02)   <cpu mode='host-passthrough'>
	I0829 18:26:25.013741   31894 main.go:141] libmachine: (ha-782425-m02)   
	I0829 18:26:25.013747   31894 main.go:141] libmachine: (ha-782425-m02)   </cpu>
	I0829 18:26:25.013755   31894 main.go:141] libmachine: (ha-782425-m02)   <os>
	I0829 18:26:25.013759   31894 main.go:141] libmachine: (ha-782425-m02)     <type>hvm</type>
	I0829 18:26:25.013764   31894 main.go:141] libmachine: (ha-782425-m02)     <boot dev='cdrom'/>
	I0829 18:26:25.013771   31894 main.go:141] libmachine: (ha-782425-m02)     <boot dev='hd'/>
	I0829 18:26:25.013777   31894 main.go:141] libmachine: (ha-782425-m02)     <bootmenu enable='no'/>
	I0829 18:26:25.013784   31894 main.go:141] libmachine: (ha-782425-m02)   </os>
	I0829 18:26:25.013789   31894 main.go:141] libmachine: (ha-782425-m02)   <devices>
	I0829 18:26:25.013797   31894 main.go:141] libmachine: (ha-782425-m02)     <disk type='file' device='cdrom'>
	I0829 18:26:25.013806   31894 main.go:141] libmachine: (ha-782425-m02)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/boot2docker.iso'/>
	I0829 18:26:25.013816   31894 main.go:141] libmachine: (ha-782425-m02)       <target dev='hdc' bus='scsi'/>
	I0829 18:26:25.013847   31894 main.go:141] libmachine: (ha-782425-m02)       <readonly/>
	I0829 18:26:25.013869   31894 main.go:141] libmachine: (ha-782425-m02)     </disk>
	I0829 18:26:25.013882   31894 main.go:141] libmachine: (ha-782425-m02)     <disk type='file' device='disk'>
	I0829 18:26:25.013897   31894 main.go:141] libmachine: (ha-782425-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:26:25.013914   31894 main.go:141] libmachine: (ha-782425-m02)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/ha-782425-m02.rawdisk'/>
	I0829 18:26:25.013926   31894 main.go:141] libmachine: (ha-782425-m02)       <target dev='hda' bus='virtio'/>
	I0829 18:26:25.013938   31894 main.go:141] libmachine: (ha-782425-m02)     </disk>
	I0829 18:26:25.013960   31894 main.go:141] libmachine: (ha-782425-m02)     <interface type='network'>
	I0829 18:26:25.013974   31894 main.go:141] libmachine: (ha-782425-m02)       <source network='mk-ha-782425'/>
	I0829 18:26:25.013985   31894 main.go:141] libmachine: (ha-782425-m02)       <model type='virtio'/>
	I0829 18:26:25.013996   31894 main.go:141] libmachine: (ha-782425-m02)     </interface>
	I0829 18:26:25.014007   31894 main.go:141] libmachine: (ha-782425-m02)     <interface type='network'>
	I0829 18:26:25.014018   31894 main.go:141] libmachine: (ha-782425-m02)       <source network='default'/>
	I0829 18:26:25.014029   31894 main.go:141] libmachine: (ha-782425-m02)       <model type='virtio'/>
	I0829 18:26:25.014041   31894 main.go:141] libmachine: (ha-782425-m02)     </interface>
	I0829 18:26:25.014051   31894 main.go:141] libmachine: (ha-782425-m02)     <serial type='pty'>
	I0829 18:26:25.014073   31894 main.go:141] libmachine: (ha-782425-m02)       <target port='0'/>
	I0829 18:26:25.014081   31894 main.go:141] libmachine: (ha-782425-m02)     </serial>
	I0829 18:26:25.014102   31894 main.go:141] libmachine: (ha-782425-m02)     <console type='pty'>
	I0829 18:26:25.014121   31894 main.go:141] libmachine: (ha-782425-m02)       <target type='serial' port='0'/>
	I0829 18:26:25.014137   31894 main.go:141] libmachine: (ha-782425-m02)     </console>
	I0829 18:26:25.014150   31894 main.go:141] libmachine: (ha-782425-m02)     <rng model='virtio'>
	I0829 18:26:25.014161   31894 main.go:141] libmachine: (ha-782425-m02)       <backend model='random'>/dev/random</backend>
	I0829 18:26:25.014169   31894 main.go:141] libmachine: (ha-782425-m02)     </rng>
	I0829 18:26:25.014176   31894 main.go:141] libmachine: (ha-782425-m02)     
	I0829 18:26:25.014187   31894 main.go:141] libmachine: (ha-782425-m02)     
	I0829 18:26:25.014198   31894 main.go:141] libmachine: (ha-782425-m02)   </devices>
	I0829 18:26:25.014209   31894 main.go:141] libmachine: (ha-782425-m02) </domain>
	I0829 18:26:25.014222   31894 main.go:141] libmachine: (ha-782425-m02) 
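	(Editor's sketch: the XML dumped above is what the kvm2 driver hands to libvirt. A roughly equivalent manual sequence against the qemu:///system URI from the machine config would be the following; the XML file name is illustrative only:)

	        virsh -c qemu:///system define ha-782425-m02.xml
	        virsh -c qemu:///system start ha-782425-m02
	        virsh -c qemu:///system domifaddr ha-782425-m02    # shows the DHCP-assigned address once the guest is up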
	I0829 18:26:25.020795   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:87:5f:42 in network default
	I0829 18:26:25.021324   31894 main.go:141] libmachine: (ha-782425-m02) Ensuring networks are active...
	I0829 18:26:25.021348   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:25.022028   31894 main.go:141] libmachine: (ha-782425-m02) Ensuring network default is active
	I0829 18:26:25.022391   31894 main.go:141] libmachine: (ha-782425-m02) Ensuring network mk-ha-782425 is active
	I0829 18:26:25.022758   31894 main.go:141] libmachine: (ha-782425-m02) Getting domain xml...
	I0829 18:26:25.023485   31894 main.go:141] libmachine: (ha-782425-m02) Creating domain...
	I0829 18:26:26.229097   31894 main.go:141] libmachine: (ha-782425-m02) Waiting to get IP...
	I0829 18:26:26.229953   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:26.230456   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:26.230482   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:26.230409   32252 retry.go:31] will retry after 237.142818ms: waiting for machine to come up
	I0829 18:26:26.469824   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:26.470329   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:26.470361   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:26.470277   32252 retry.go:31] will retry after 242.315813ms: waiting for machine to come up
	I0829 18:26:26.713718   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:26.714266   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:26.714296   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:26.714217   32252 retry.go:31] will retry after 341.179806ms: waiting for machine to come up
	I0829 18:26:27.056776   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:27.057265   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:27.057294   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:27.057217   32252 retry.go:31] will retry after 595.192989ms: waiting for machine to come up
	I0829 18:26:27.653881   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:27.654386   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:27.654424   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:27.654332   32252 retry.go:31] will retry after 521.996873ms: waiting for machine to come up
	I0829 18:26:28.177994   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:28.178365   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:28.178393   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:28.178331   32252 retry.go:31] will retry after 887.019406ms: waiting for machine to come up
	I0829 18:26:29.067331   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:29.067765   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:29.067802   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:29.067761   32252 retry.go:31] will retry after 881.071096ms: waiting for machine to come up
	I0829 18:26:29.949908   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:29.950225   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:29.950246   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:29.950203   32252 retry.go:31] will retry after 971.946782ms: waiting for machine to come up
	I0829 18:26:30.924291   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:30.924673   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:30.924707   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:30.924637   32252 retry.go:31] will retry after 1.32152902s: waiting for machine to come up
	I0829 18:26:32.248043   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:32.248448   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:32.248474   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:32.248405   32252 retry.go:31] will retry after 1.905467671s: waiting for machine to come up
	I0829 18:26:34.155199   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:34.155548   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:34.155578   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:34.155497   32252 retry.go:31] will retry after 2.896327126s: waiting for machine to come up
	I0829 18:26:37.054991   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:37.055413   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:37.055457   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:37.055369   32252 retry.go:31] will retry after 2.938271443s: waiting for machine to come up
	I0829 18:26:39.995460   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:39.995861   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:39.995887   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:39.995826   32252 retry.go:31] will retry after 3.097722772s: waiting for machine to come up
	I0829 18:26:43.095812   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:43.096180   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find current IP address of domain ha-782425-m02 in network mk-ha-782425
	I0829 18:26:43.096202   31894 main.go:141] libmachine: (ha-782425-m02) DBG | I0829 18:26:43.096138   32252 retry.go:31] will retry after 5.653782019s: waiting for machine to come up
	I0829 18:26:48.754518   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.754970   31894 main.go:141] libmachine: (ha-782425-m02) Found IP for machine: 192.168.39.253
	I0829 18:26:48.754996   31894 main.go:141] libmachine: (ha-782425-m02) Reserving static IP address...
	I0829 18:26:48.755009   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has current primary IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.755387   31894 main.go:141] libmachine: (ha-782425-m02) DBG | unable to find host DHCP lease matching {name: "ha-782425-m02", mac: "52:54:00:15:79:c5", ip: "192.168.39.253"} in network mk-ha-782425
	I0829 18:26:48.824716   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Getting to WaitForSSH function...
	I0829 18:26:48.824744   31894 main.go:141] libmachine: (ha-782425-m02) Reserved static IP address: 192.168.39.253
	I0829 18:26:48.824757   31894 main.go:141] libmachine: (ha-782425-m02) Waiting for SSH to be available...
	I0829 18:26:48.827487   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.827905   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:48.827937   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.828060   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Using SSH client type: external
	I0829 18:26:48.828083   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa (-rw-------)
	I0829 18:26:48.828111   31894 main.go:141] libmachine: (ha-782425-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:26:48.828124   31894 main.go:141] libmachine: (ha-782425-m02) DBG | About to run SSH command:
	I0829 18:26:48.828213   31894 main.go:141] libmachine: (ha-782425-m02) DBG | exit 0
	I0829 18:26:48.950130   31894 main.go:141] libmachine: (ha-782425-m02) DBG | SSH cmd err, output: <nil>: 
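	(Editor's sketch: the retry loop above is the driver polling libvirt for a DHCP lease on the mk-ha-782425 network until the guest requests an address. The same lease can be listed by hand, for example:)

	        virsh -c qemu:///system net-dhcp-leases mk-ha-782425
	        # expected to show MAC 52:54:00:15:79:c5 with 192.168.39.253 once the guest is up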
	I0829 18:26:48.950378   31894 main.go:141] libmachine: (ha-782425-m02) KVM machine creation complete!
	I0829 18:26:48.950774   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetConfigRaw
	I0829 18:26:48.951236   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:48.951416   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:48.951620   31894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:26:48.951640   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:26:48.952783   31894 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:26:48.952795   31894 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:26:48.952800   31894 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:26:48.952806   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:48.955023   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.955373   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:48.955400   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:48.955530   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:48.955707   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:48.955859   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:48.956021   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:48.956191   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:48.956388   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:48.956397   31894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:26:49.057053   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:26:49.057081   31894 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:26:49.057092   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.059825   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.060176   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.060198   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.060366   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.060522   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.060689   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.060816   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.060948   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.061103   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.061114   31894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:26:49.158598   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:26:49.158654   31894 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:26:49.158661   31894 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:26:49.158668   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:49.158943   31894 buildroot.go:166] provisioning hostname "ha-782425-m02"
	I0829 18:26:49.158973   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:49.159180   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.161715   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.162138   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.162164   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.162301   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.162472   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.162613   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.162734   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.162859   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.163113   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.163135   31894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425-m02 && echo "ha-782425-m02" | sudo tee /etc/hostname
	I0829 18:26:49.271395   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425-m02
	
	I0829 18:26:49.271419   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.274146   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.274575   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.274606   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.274764   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.274952   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.275078   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.275243   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.275399   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.275553   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.275567   31894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:26:49.378107   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:26:49.378139   31894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:26:49.378155   31894 buildroot.go:174] setting up certificates
	I0829 18:26:49.378162   31894 provision.go:84] configureAuth start
	I0829 18:26:49.378170   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetMachineName
	I0829 18:26:49.378449   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:49.381117   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.381453   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.381485   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.381615   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.383655   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.383942   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.383963   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.384090   31894 provision.go:143] copyHostCerts
	I0829 18:26:49.384120   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:49.384149   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:26:49.384158   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:26:49.384221   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:26:49.384290   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:49.384307   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:26:49.384314   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:26:49.384338   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:26:49.384382   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:49.384400   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:26:49.384406   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:26:49.384425   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:26:49.384472   31894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425-m02 san=[127.0.0.1 192.168.39.253 ha-782425-m02 localhost minikube]
	I0829 18:26:49.532968   31894 provision.go:177] copyRemoteCerts
	I0829 18:26:49.533025   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:26:49.533048   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.535572   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.535900   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.535929   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.536080   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.536237   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.536361   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.536456   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:49.611693   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:26:49.611749   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:26:49.634177   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:26:49.634250   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:26:49.658566   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:26:49.658661   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:26:49.683473   31894 provision.go:87] duration metric: took 305.298786ms to configureAuth
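	(Editor's sketch: configureAuth generated a server certificate with the SANs listed above (127.0.0.1, 192.168.39.253, ha-782425-m02, localhost, minikube) and copied it to /etc/docker/server.pem on the guest; as a sanity check it could be inspected there with, for example:)

	        sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'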
	I0829 18:26:49.683495   31894 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:26:49.683689   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:49.683765   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.686349   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.686849   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.686885   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.687061   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.687228   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.687354   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.687470   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.687658   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:49.687843   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:49.687859   31894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:26:49.896518   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:26:49.896541   31894 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:26:49.896551   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetURL
	I0829 18:26:49.897762   31894 main.go:141] libmachine: (ha-782425-m02) DBG | Using libvirt version 6000000
	I0829 18:26:49.899894   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.900353   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.900387   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.900522   31894 main.go:141] libmachine: Docker is up and running!
	I0829 18:26:49.900537   31894 main.go:141] libmachine: Reticulating splines...
	I0829 18:26:49.900544   31894 client.go:171] duration metric: took 25.318847548s to LocalClient.Create
	I0829 18:26:49.900564   31894 start.go:167] duration metric: took 25.318905692s to libmachine.API.Create "ha-782425"
	I0829 18:26:49.900575   31894 start.go:293] postStartSetup for "ha-782425-m02" (driver="kvm2")
	I0829 18:26:49.900588   31894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:26:49.900617   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:49.900833   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:26:49.900856   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:49.903094   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.903457   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:49.903483   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:49.903600   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:49.903780   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:49.903938   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:49.904071   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:49.979923   31894 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:26:49.983726   31894 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:26:49.983748   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:26:49.983804   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:26:49.983870   31894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:26:49.983880   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:26:49.983955   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:26:49.992355   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:50.013967   31894 start.go:296] duration metric: took 113.380706ms for postStartSetup
	I0829 18:26:50.014019   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetConfigRaw
	I0829 18:26:50.014605   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:50.017312   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.017650   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.017671   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.017867   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:26:50.018069   31894 start.go:128] duration metric: took 25.454075609s to createHost
	I0829 18:26:50.018104   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:50.020313   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.020652   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.020675   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.020813   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:50.020971   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.021108   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.021259   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:50.021420   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:26:50.021615   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0829 18:26:50.021627   31894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:26:50.114540   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724956010.095185222
	
	I0829 18:26:50.114564   31894 fix.go:216] guest clock: 1724956010.095185222
	I0829 18:26:50.114573   31894 fix.go:229] Guest: 2024-08-29 18:26:50.095185222 +0000 UTC Remote: 2024-08-29 18:26:50.018079841 +0000 UTC m=+72.186075366 (delta=77.105381ms)
	I0829 18:26:50.114605   31894 fix.go:200] guest clock delta is within tolerance: 77.105381ms
	I0829 18:26:50.114612   31894 start.go:83] releasing machines lock for "ha-782425-m02", held for 25.550749818s
	I0829 18:26:50.114634   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.114882   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:50.117266   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.117616   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.117645   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.119624   31894 out.go:177] * Found network options:
	I0829 18:26:50.120677   31894 out.go:177]   - NO_PROXY=192.168.39.39
	W0829 18:26:50.121590   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:26:50.121613   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.122163   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.122361   31894 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:26:50.122475   31894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:26:50.122508   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	W0829 18:26:50.122535   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:26:50.122608   31894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:26:50.122626   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:26:50.125046   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125190   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125427   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.125452   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125553   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:50.125656   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:50.125692   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:50.125754   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.125826   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:26:50.125894   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:50.126034   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:26:50.126052   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:50.126217   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:26:50.126372   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:26:50.349617   31894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:26:50.355355   31894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:26:50.355428   31894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:26:50.370751   31894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:26:50.370778   31894 start.go:495] detecting cgroup driver to use...
	I0829 18:26:50.370852   31894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:26:50.385898   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:26:50.399592   31894 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:26:50.399667   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:26:50.413250   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:26:50.427350   31894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:26:50.541879   31894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:26:50.692562   31894 docker.go:233] disabling docker service ...
	I0829 18:26:50.692650   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:26:50.707727   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:26:50.720199   31894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:26:50.866477   31894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:26:50.989936   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:26:51.003683   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:26:51.023184   31894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:26:51.023256   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.032770   31894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:26:51.032828   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.042672   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.052846   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.062397   31894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:26:51.072081   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.081582   31894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.098364   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:26:51.108109   31894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:26:51.117022   31894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:26:51.117077   31894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:26:51.128752   31894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:26:51.137880   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:51.261126   31894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:26:51.347424   31894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:26:51.347554   31894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:26:51.352210   31894 start.go:563] Will wait 60s for crictl version
	I0829 18:26:51.352272   31894 ssh_runner.go:195] Run: which crictl
	I0829 18:26:51.355953   31894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:26:51.391213   31894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:26:51.391285   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:26:51.418270   31894 ssh_runner.go:195] Run: crio --version
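The runs above show minikube rewriting /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image pinned to registry.k8s.io/pause:3.10, cgroup manager forced to cgroupfs, net.ipv4.ip_unprivileged_port_start opened) and then restarting CRI-O before checking the socket and crictl version. As a rough sketch only, not minikube's actual code path, the same edits could be driven from Go roughly like this; the file path and option values come from the log, the helper name and structure are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// applyCrioConfig mirrors the sed edits seen in the log: it pins the pause
// image and forces the requested cgroup manager in CRI-O's drop-in config,
// then restarts the service. Illustrative sketch only.
func applyCrioConfig(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	// Values taken from the log above.
	if err := applyCrioConfig("registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}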
	I0829 18:26:51.445893   31894 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:26:51.447167   31894 out.go:177]   - env NO_PROXY=192.168.39.39
	I0829 18:26:51.448349   31894 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:26:51.450818   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:51.451141   31894 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:26:38 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:26:51.451169   31894 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:26:51.451372   31894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:26:51.455456   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:51.467458   31894 mustload.go:65] Loading cluster: ha-782425
	I0829 18:26:51.467649   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:26:51.467904   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:51.467937   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:51.482321   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0829 18:26:51.482756   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:51.483190   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:51.483210   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:51.483572   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:51.483755   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:26:51.485349   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:51.485627   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:51.485686   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:51.500890   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33409
	I0829 18:26:51.501294   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:51.501713   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:51.501740   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:51.502059   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:51.502268   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:51.502424   31894 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.253
	I0829 18:26:51.502438   31894 certs.go:194] generating shared ca certs ...
	I0829 18:26:51.502456   31894 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:51.502597   31894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:26:51.502643   31894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:26:51.502653   31894 certs.go:256] generating profile certs ...
	I0829 18:26:51.502720   31894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:26:51.502744   31894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6
	I0829 18:26:51.502756   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.253 192.168.39.254]
	I0829 18:26:51.698684   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6 ...
	I0829 18:26:51.698716   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6: {Name:mkf0e9d9ffd254e920b63ad96df28873faca93cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:51.698891   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6 ...
	I0829 18:26:51.698904   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6: {Name:mk6960e3e0d1e62eafe3259930954d26962a40f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:26:51.698983   31894 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.f45910f6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:26:51.699126   31894 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.f45910f6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
	I0829 18:26:51.699258   31894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
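At this point the existing minikubeCA is reused and a fresh apiserver serving certificate is minted whose IP SANs cover the service IP 10.96.0.1, loopback, both control-plane node IPs and the HA VIP 192.168.39.254. A minimal, self-contained sketch of issuing such a SAN-bearing certificate from a CA is shown below; only the SAN list is taken from the log, while the key size, validity window, subject names and the self-signed CA stand-in are assumptions (minikube itself reuses its persisted CA key and cert rather than generating one):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the "Generating cert ... with IP's: [...]" log line.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.39"), net.ParseIP("192.168.39.253"), net.ParseIP("192.168.39.254"),
	}

	// Throwaway CA for the example; error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving certificate carrying the IP SANs above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  sans,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}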
	I0829 18:26:51.699276   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:26:51.699290   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:26:51.699312   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:26:51.699328   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:26:51.699343   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:26:51.699358   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:26:51.699373   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:26:51.699388   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:26:51.699441   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:26:51.699473   31894 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:26:51.699483   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:26:51.699509   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:26:51.699540   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:26:51.699565   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:26:51.699606   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:26:51.699634   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:51.699651   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:26:51.699665   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:26:51.699699   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:51.702662   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:51.703051   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:51.703077   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:51.703281   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:51.703469   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:51.703636   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:51.703777   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:51.778482   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 18:26:51.783452   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 18:26:51.794645   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 18:26:51.805231   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0829 18:26:51.817768   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 18:26:51.821821   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 18:26:51.833444   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 18:26:51.838413   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0829 18:26:51.851612   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 18:26:51.860669   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 18:26:51.872429   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 18:26:51.876283   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 18:26:51.887468   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:26:51.911598   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:26:51.933833   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:26:51.955722   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:26:51.976904   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 18:26:51.997635   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:26:52.019051   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:26:52.040223   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:26:52.061308   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:26:52.082293   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:26:52.103597   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:26:52.125881   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 18:26:52.142365   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0829 18:26:52.157971   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 18:26:52.178944   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0829 18:26:52.195384   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 18:26:52.210359   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 18:26:52.226862   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 18:26:52.241892   31894 ssh_runner.go:195] Run: openssl version
	I0829 18:26:52.247232   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:26:52.257482   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:52.261899   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:52.261957   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:26:52.267217   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:26:52.277075   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:26:52.287868   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:26:52.292034   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:26:52.292087   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:26:52.297500   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:26:52.307519   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:26:52.317521   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:26:52.321727   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:26:52.321778   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:26:52.327184   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:26:52.337920   31894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:26:52.341915   31894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:26:52.341977   31894 kubeadm.go:934] updating node {m02 192.168.39.253 8443 v1.31.0 crio true true} ...
	I0829 18:26:52.342064   31894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:26:52.342119   31894 kube-vip.go:115] generating kube-vip config ...
	I0829 18:26:52.342166   31894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:26:52.359964   31894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:26:52.360047   31894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
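The kube-vip static-pod manifest above is rendered in memory and, as the later scp line shows, written to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes) so the kubelet launches it as soon as it starts. A stripped-down sketch of that kind of templating is shown below; the template body is heavily abbreviated and the type and function names are hypothetical, only the VIP 192.168.39.254, port 8443 and image ghcr.io/kube-vip/kube-vip:v0.8.0 come from the log:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that vary per cluster; everything else in the
// manifest is static. Names here are illustrative, not minikube's own.
type vipParams struct {
	VIP  string
	Port string
}

// kubeVipTmpl is a heavily abbreviated stand-in for the full static-pod
// manifest shown in the log above.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log: the HA VIP and the API server port.
	_ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: "8443"})
}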
	I0829 18:26:52.360114   31894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:26:52.369722   31894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 18:26:52.369812   31894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 18:26:52.378997   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 18:26:52.379029   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:26:52.379043   31894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0829 18:26:52.379102   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:26:52.379046   31894 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0829 18:26:52.383270   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 18:26:52.383302   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 18:26:53.331385   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:26:53.331488   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:26:53.336704   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 18:26:53.336745   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 18:26:53.471271   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:26:53.507741   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:26:53.507857   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:26:53.523679   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 18:26:53.523720   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0829 18:26:53.864618   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 18:26:53.874698   31894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:26:53.890036   31894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:26:53.905409   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 18:26:53.920758   31894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:26:53.924420   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:26:53.936824   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:26:54.059981   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:26:54.076111   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:26:54.076445   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:26:54.076492   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:26:54.091747   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0829 18:26:54.092196   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:26:54.092730   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:26:54.092755   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:26:54.093141   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:26:54.093353   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:26:54.093507   31894 start.go:317] joinCluster: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:26:54.093623   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 18:26:54.093649   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:26:54.096423   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:54.096918   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:26:54.096944   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:26:54.097130   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:26:54.097307   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:26:54.097457   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:26:54.097586   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:26:54.239537   31894 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:26:54.239582   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0avp5s.23nn67rbaqfsi40a --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m02 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0829 18:27:15.021821   31894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0avp5s.23nn67rbaqfsi40a --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m02 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (20.782191808s)
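The join command above embeds a bootstrap token plus a CA certificate hash; kubeadm's --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A small sketch of recomputing that value from the CA certificate is below; the /var/lib/minikube/certs/ca.crt path is taken from the log and everything else is illustrative:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes kubeadm's --discovery-token-ca-cert-hash value:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}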
	I0829 18:27:15.021888   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 18:27:15.612755   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-782425-m02 minikube.k8s.io/updated_at=2024_08_29T18_27_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=ha-782425 minikube.k8s.io/primary=false
	I0829 18:27:15.731860   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-782425-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 18:27:15.862540   31894 start.go:319] duration metric: took 21.769029029s to joinCluster
	I0829 18:27:15.862630   31894 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:27:15.862962   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:27:15.864096   31894 out.go:177] * Verifying Kubernetes components...
	I0829 18:27:15.865276   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:27:16.124824   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:27:16.172898   31894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:27:16.173244   31894 kapi.go:59] client config for ha-782425: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt", KeyFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key", CAFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 18:27:16.173320   31894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.39:8443
	I0829 18:27:16.173613   31894 node_ready.go:35] waiting up to 6m0s for node "ha-782425-m02" to be "Ready" ...
	I0829 18:27:16.173770   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:16.173786   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:16.173796   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:16.173808   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:16.184897   31894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0829 18:27:16.673841   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:16.673863   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:16.673871   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:16.673876   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:16.685724   31894 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0829 18:27:17.174652   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:17.174676   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:17.174685   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:17.174688   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:17.183591   31894 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 18:27:17.673859   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:17.673879   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:17.673888   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:17.673892   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:17.676930   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:18.173805   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:18.173828   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:18.173835   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:18.173839   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:18.177547   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:18.178015   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:18.674084   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:18.674122   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:18.674130   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:18.674135   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:18.677314   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:19.174530   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:19.174558   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:19.174569   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:19.174574   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:19.178294   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:19.674721   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:19.674748   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:19.674756   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:19.674759   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:19.678013   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:20.174266   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:20.174293   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:20.174309   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:20.174316   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:20.177447   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:20.178203   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:20.674531   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:20.674550   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:20.674558   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:20.674562   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:20.677874   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:21.174774   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:21.174800   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:21.174812   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:21.174818   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:21.179000   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:21.673783   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:21.673806   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:21.673816   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:21.673824   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:21.677004   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:22.173909   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:22.173934   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:22.173942   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:22.173947   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:22.193187   31894 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0829 18:27:22.193743   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:22.673989   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:22.674020   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:22.674032   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:22.674038   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:22.680063   31894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0829 18:27:23.174433   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:23.174453   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:23.174461   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:23.174466   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:23.177961   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:23.674354   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:23.674378   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:23.674390   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:23.674398   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:23.680636   31894 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0829 18:27:24.173781   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:24.173805   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:24.173814   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:24.173821   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:24.177001   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:24.674828   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:24.674851   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:24.674859   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:24.674863   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:24.678063   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:24.678649   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:25.173897   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:25.173919   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:25.173927   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:25.173935   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:25.176807   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:25.674819   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:25.674846   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:25.674857   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:25.674863   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:25.677776   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:26.173778   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:26.173801   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:26.173809   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:26.173812   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:26.176825   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:26.674798   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:26.674821   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:26.674830   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:26.674834   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:26.677805   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:27.174452   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:27.174478   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:27.174488   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:27.174492   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:27.177827   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:27.178363   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:27.674779   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:27.674802   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:27.674809   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:27.674814   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:27.677457   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:28.173974   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:28.173992   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:28.173999   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:28.174002   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:28.176837   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:28.674759   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:28.674782   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:28.674790   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:28.674795   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:28.678731   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:29.173798   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:29.173817   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:29.173825   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:29.173828   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:29.176964   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:29.674283   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:29.674305   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:29.674312   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:29.674318   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:29.677442   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:29.677913   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:30.174410   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:30.174433   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:30.174445   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:30.174452   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:30.177537   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:30.674265   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:30.674289   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:30.674297   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:30.674300   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:30.677897   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:31.173978   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:31.174003   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:31.174011   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:31.174016   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:31.177139   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:31.674159   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:31.674181   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:31.674190   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:31.674194   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:31.677204   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:32.174257   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:32.174280   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:32.174288   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:32.174291   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:32.179759   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:27:32.180267   31894 node_ready.go:53] node "ha-782425-m02" has status "Ready":"False"
	I0829 18:27:32.674573   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:32.674599   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:32.674611   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:32.674618   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:32.678139   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:33.174708   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:33.174730   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:33.174738   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:33.174742   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:33.177578   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:33.674591   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:33.674614   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:33.674622   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:33.674625   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:33.678068   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.174786   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.174809   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.174817   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.174820   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.178381   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.178993   31894 node_ready.go:49] node "ha-782425-m02" has status "Ready":"True"
	I0829 18:27:34.179011   31894 node_ready.go:38] duration metric: took 18.005376284s for node "ha-782425-m02" to be "Ready" ...
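
For reference, the ~500ms readiness poll recorded above (repeated GET /api/v1/nodes/ha-782425-m02 until the Ready condition flips) amounts to a loop like the following sketch. It uses client-go; waitNodeReady, the interval and the timeout are illustrative, not minikube's actual node_ready.go helper.

// Sketch only: poll a node's Ready condition roughly every 500ms,
// as the GET /api/v1/nodes/<name> requests above do.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady is a hypothetical helper name.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling until the timeout
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
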
	I0829 18:27:34.179020   31894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:27:34.179102   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:34.179115   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.179122   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.179127   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.183202   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:34.191791   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.191876   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nw2x2
	I0829 18:27:34.191887   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.191896   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.191905   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.196079   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:34.196953   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.196970   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.196979   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.196986   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.199883   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.200457   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.200476   31894 pod_ready.go:82] duration metric: took 8.659056ms for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.200486   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.200548   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qhxm5
	I0829 18:27:34.200558   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.200565   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.200575   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.203309   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.203892   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.203908   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.203917   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.203923   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.206392   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.206857   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.206873   31894 pod_ready.go:82] duration metric: took 6.38056ms for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.206882   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.206924   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425
	I0829 18:27:34.206931   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.206938   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.206942   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.209466   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.210151   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.210167   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.210177   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.210182   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.212469   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.212939   31894 pod_ready.go:93] pod "etcd-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.212959   31894 pod_ready.go:82] duration metric: took 6.070221ms for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.212970   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.213032   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m02
	I0829 18:27:34.213042   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.213052   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.213060   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.215836   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.216488   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.216505   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.216515   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.216521   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.219029   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:34.219488   31894 pod_ready.go:93] pod "etcd-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.219505   31894 pod_ready.go:82] duration metric: took 6.524275ms for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.219521   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.374829   31894 request.go:632] Waited for 155.237189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:27:34.374892   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:27:34.374899   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.374909   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.374918   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.378443   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.575302   31894 request.go:632] Waited for 196.186988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.575357   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:34.575363   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.575370   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.575374   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.578698   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.579088   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.579104   31894 pod_ready.go:82] duration metric: took 359.570997ms for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.579112   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.775532   31894 request.go:632] Waited for 196.367952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:27:34.775624   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:27:34.775632   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.775639   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.775643   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.779877   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:34.975797   31894 request.go:632] Waited for 195.36549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.975880   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:34.975891   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:34.975901   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:34.975910   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:34.979290   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:34.979813   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:34.979831   31894 pod_ready.go:82] duration metric: took 400.713484ms for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:34.979841   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.174927   31894 request.go:632] Waited for 195.018055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:27:35.174988   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:27:35.174992   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.175000   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.175004   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.178232   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.375378   31894 request.go:632] Waited for 196.371474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:35.375427   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:35.375433   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.375440   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.375445   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.378937   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.379567   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:35.379588   31894 pod_ready.go:82] duration metric: took 399.738929ms for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.379604   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.575600   31894 request.go:632] Waited for 195.935535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:27:35.575675   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:27:35.575680   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.575688   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.575692   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.578977   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.775052   31894 request.go:632] Waited for 195.310084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:35.775107   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:35.775112   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.775119   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.775123   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.778281   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:35.778974   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:35.778993   31894 pod_ready.go:82] duration metric: took 399.382265ms for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.779002   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:35.975096   31894 request.go:632] Waited for 196.038385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:27:35.975191   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:27:35.975203   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:35.975214   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:35.975222   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:35.979773   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:36.174894   31894 request.go:632] Waited for 194.298911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:36.174953   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:36.174962   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.174973   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.174977   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.178216   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.178744   31894 pod_ready.go:93] pod "kube-proxy-5k8xr" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:36.178762   31894 pod_ready.go:82] duration metric: took 399.754717ms for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.178772   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.375796   31894 request.go:632] Waited for 196.967983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:27:36.375874   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:27:36.375886   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.375896   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.375904   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.379499   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.575244   31894 request.go:632] Waited for 194.690586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.575296   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.575302   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.575309   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.575313   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.578693   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.579177   31894 pod_ready.go:93] pod "kube-proxy-d5kbx" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:36.579194   31894 pod_ready.go:82] duration metric: took 400.417285ms for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.579204   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.775441   31894 request.go:632] Waited for 196.152904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:27:36.775501   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:27:36.775506   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.775513   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.775520   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.779261   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.975299   31894 request.go:632] Waited for 195.363204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.975353   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:27:36.975359   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:36.975366   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:36.975371   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:36.978496   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:36.979108   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:36.979128   31894 pod_ready.go:82] duration metric: took 399.917184ms for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:36.979139   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:37.175159   31894 request.go:632] Waited for 195.953066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:27:37.175232   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:27:37.175237   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.175244   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.175248   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.177949   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:37.375816   31894 request.go:632] Waited for 197.404743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:37.375886   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:27:37.375891   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.375899   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.375904   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.378860   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:27:37.379533   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:27:37.379552   31894 pod_ready.go:82] duration metric: took 400.406126ms for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:27:37.379565   31894 pod_ready.go:39] duration metric: took 3.200534207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
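
The "Waited for ... due to client-side throttling" messages interleaved through the pod checks above come from client-go's default client-side rate limiter (QPS 5, burst 10 when left unset on rest.Config). A minimal sketch of raising those limits, if one wanted to avoid the waits; the field names are client-go's, the values are illustrative only:

// Sketch: relax client-go's client-side rate limiting (the source of the
// "Waited for ... due to client-side throttling" lines above).
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Defaults are QPS=5, Burst=10; bumping them reduces client-side waits.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
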
	I0829 18:27:37.379587   31894 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:27:37.379643   31894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:27:37.393015   31894 api_server.go:72] duration metric: took 21.530341114s to wait for apiserver process to appear ...
	I0829 18:27:37.393037   31894 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:27:37.393061   31894 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I0829 18:27:37.399471   31894 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I0829 18:27:37.399528   31894 round_trippers.go:463] GET https://192.168.39.39:8443/version
	I0829 18:27:37.399535   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.399543   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.399548   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.400367   31894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 18:27:37.400474   31894 api_server.go:141] control plane version: v1.31.0
	I0829 18:27:37.400492   31894 api_server.go:131] duration metric: took 7.448915ms to wait for apiserver health ...
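
The healthz probe and version read above can be reproduced against any cluster with a short client-go call such as this sketch (checkHealthz is an illustrative name):

// Sketch: hit /healthz and /version the way the two requests above do.
package example

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func checkHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	fmt.Printf("healthz: %s\n", body) // expected: "ok"

	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("version: %w", err)
	}
	fmt.Printf("control plane version: %s\n", ver.GitVersion) // e.g. v1.31.0
	return nil
}
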
	I0829 18:27:37.400499   31894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:27:37.574794   31894 request.go:632] Waited for 174.234454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.574875   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.574884   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.574893   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.574897   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.580231   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:27:37.584653   31894 system_pods.go:59] 17 kube-system pods found
	I0829 18:27:37.584686   31894 system_pods.go:61] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:27:37.584691   31894 system_pods.go:61] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:27:37.584696   31894 system_pods.go:61] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:27:37.584699   31894 system_pods.go:61] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:27:37.584702   31894 system_pods.go:61] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:27:37.584705   31894 system_pods.go:61] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:27:37.584708   31894 system_pods.go:61] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:27:37.584711   31894 system_pods.go:61] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:27:37.584715   31894 system_pods.go:61] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:27:37.584721   31894 system_pods.go:61] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:27:37.584724   31894 system_pods.go:61] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:27:37.584727   31894 system_pods.go:61] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:27:37.584730   31894 system_pods.go:61] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:27:37.584735   31894 system_pods.go:61] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:27:37.584738   31894 system_pods.go:61] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:27:37.584741   31894 system_pods.go:61] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:27:37.584744   31894 system_pods.go:61] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:27:37.584751   31894 system_pods.go:74] duration metric: took 184.247241ms to wait for pod list to return data ...
	I0829 18:27:37.584758   31894 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:27:37.775208   31894 request.go:632] Waited for 190.357456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:27:37.775264   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:27:37.775269   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.775276   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.775281   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.779856   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:37.780083   31894 default_sa.go:45] found service account: "default"
	I0829 18:27:37.780099   31894 default_sa.go:55] duration metric: took 195.333777ms for default service account to be created ...
	I0829 18:27:37.780106   31894 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:27:37.975539   31894 request.go:632] Waited for 195.372955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.975592   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:27:37.975598   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:37.975605   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:37.975610   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:37.980062   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:27:37.984010   31894 system_pods.go:86] 17 kube-system pods found
	I0829 18:27:37.984039   31894 system_pods.go:89] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:27:37.984044   31894 system_pods.go:89] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:27:37.984048   31894 system_pods.go:89] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:27:37.984052   31894 system_pods.go:89] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:27:37.984055   31894 system_pods.go:89] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:27:37.984058   31894 system_pods.go:89] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:27:37.984062   31894 system_pods.go:89] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:27:37.984065   31894 system_pods.go:89] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:27:37.984069   31894 system_pods.go:89] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:27:37.984074   31894 system_pods.go:89] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:27:37.984077   31894 system_pods.go:89] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:27:37.984080   31894 system_pods.go:89] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:27:37.984083   31894 system_pods.go:89] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:27:37.984087   31894 system_pods.go:89] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:27:37.984092   31894 system_pods.go:89] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:27:37.984097   31894 system_pods.go:89] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:27:37.984100   31894 system_pods.go:89] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:27:37.984109   31894 system_pods.go:126] duration metric: took 203.998182ms to wait for k8s-apps to be running ...
	I0829 18:27:37.984118   31894 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:27:37.984158   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:27:37.998975   31894 system_svc.go:56] duration metric: took 14.842358ms WaitForService to wait for kubelet
	I0829 18:27:37.999011   31894 kubeadm.go:582] duration metric: took 22.136338987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:27:37.999034   31894 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:27:38.175474   31894 request.go:632] Waited for 176.363823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes
	I0829 18:27:38.175542   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes
	I0829 18:27:38.175547   31894 round_trippers.go:469] Request Headers:
	I0829 18:27:38.175555   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:27:38.175558   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:27:38.178967   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:27:38.179631   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:27:38.179654   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:27:38.179663   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:27:38.179667   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:27:38.179671   31894 node_conditions.go:105] duration metric: took 180.632421ms to run NodePressure ...
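
The NodePressure step above lists the nodes and reads each node's reported capacity (ephemeral storage and CPU). A sketch of the same read with client-go; the constants are from the core/v1 API, the helper name is illustrative:

// Sketch: list nodes and print the capacity fields logged above,
// plus any pressure conditions.
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
	return nil
}
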
	I0829 18:27:38.179681   31894 start.go:241] waiting for startup goroutines ...
	I0829 18:27:38.179704   31894 start.go:255] writing updated cluster config ...
	I0829 18:27:38.181914   31894 out.go:201] 
	I0829 18:27:38.183620   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:27:38.183712   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:27:38.185446   31894 out.go:177] * Starting "ha-782425-m03" control-plane node in "ha-782425" cluster
	I0829 18:27:38.186630   31894 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:27:38.186655   31894 cache.go:56] Caching tarball of preloaded images
	I0829 18:27:38.186768   31894 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:27:38.186782   31894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:27:38.186867   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:27:38.187024   31894 start.go:360] acquireMachinesLock for ha-782425-m03: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:27:38.187066   31894 start.go:364] duration metric: took 24.034µs to acquireMachinesLock for "ha-782425-m03"
	I0829 18:27:38.187088   31894 start.go:93] Provisioning new machine with config: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:27:38.187190   31894 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0829 18:27:38.188663   31894 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 18:27:38.188741   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:27:38.188775   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:27:38.203687   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0829 18:27:38.204082   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:27:38.204533   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:27:38.204555   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:27:38.204845   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:27:38.205056   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:27:38.205175   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:27:38.205366   31894 start.go:159] libmachine.API.Create for "ha-782425" (driver="kvm2")
	I0829 18:27:38.205393   31894 client.go:168] LocalClient.Create starting
	I0829 18:27:38.205421   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 18:27:38.205454   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:27:38.205469   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:27:38.205514   31894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 18:27:38.205532   31894 main.go:141] libmachine: Decoding PEM data...
	I0829 18:27:38.205542   31894 main.go:141] libmachine: Parsing certificate...
	I0829 18:27:38.205563   31894 main.go:141] libmachine: Running pre-create checks...
	I0829 18:27:38.205570   31894 main.go:141] libmachine: (ha-782425-m03) Calling .PreCreateCheck
	I0829 18:27:38.205701   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetConfigRaw
	I0829 18:27:38.206078   31894 main.go:141] libmachine: Creating machine...
	I0829 18:27:38.206106   31894 main.go:141] libmachine: (ha-782425-m03) Calling .Create
	I0829 18:27:38.206218   31894 main.go:141] libmachine: (ha-782425-m03) Creating KVM machine...
	I0829 18:27:38.207453   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found existing default KVM network
	I0829 18:27:38.207610   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found existing private KVM network mk-ha-782425
	I0829 18:27:38.207753   31894 main.go:141] libmachine: (ha-782425-m03) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03 ...
	I0829 18:27:38.207778   31894 main.go:141] libmachine: (ha-782425-m03) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:27:38.207833   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:38.207745   32645 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:27:38.207995   31894 main.go:141] libmachine: (ha-782425-m03) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 18:27:38.434867   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:38.434751   32645 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa...
	I0829 18:27:39.031080   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:39.030952   32645 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/ha-782425-m03.rawdisk...
	I0829 18:27:39.031119   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Writing magic tar header
	I0829 18:27:39.031134   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Writing SSH key tar header
	I0829 18:27:39.031147   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:39.031066   32645 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03 ...
	I0829 18:27:39.031165   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03
	I0829 18:27:39.031209   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03 (perms=drwx------)
	I0829 18:27:39.031234   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 18:27:39.031245   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 18:27:39.031258   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:27:39.031271   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 18:27:39.031287   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 18:27:39.031298   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 18:27:39.031310   31894 main.go:141] libmachine: (ha-782425-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 18:27:39.031318   31894 main.go:141] libmachine: (ha-782425-m03) Creating domain...
	I0829 18:27:39.031329   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 18:27:39.031356   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 18:27:39.031367   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home/jenkins
	I0829 18:27:39.031377   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Checking permissions on dir: /home
	I0829 18:27:39.031385   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Skipping /home - not owner
	I0829 18:27:39.032300   31894 main.go:141] libmachine: (ha-782425-m03) define libvirt domain using xml: 
	I0829 18:27:39.032322   31894 main.go:141] libmachine: (ha-782425-m03) <domain type='kvm'>
	I0829 18:27:39.032329   31894 main.go:141] libmachine: (ha-782425-m03)   <name>ha-782425-m03</name>
	I0829 18:27:39.032344   31894 main.go:141] libmachine: (ha-782425-m03)   <memory unit='MiB'>2200</memory>
	I0829 18:27:39.032352   31894 main.go:141] libmachine: (ha-782425-m03)   <vcpu>2</vcpu>
	I0829 18:27:39.032359   31894 main.go:141] libmachine: (ha-782425-m03)   <features>
	I0829 18:27:39.032368   31894 main.go:141] libmachine: (ha-782425-m03)     <acpi/>
	I0829 18:27:39.032379   31894 main.go:141] libmachine: (ha-782425-m03)     <apic/>
	I0829 18:27:39.032387   31894 main.go:141] libmachine: (ha-782425-m03)     <pae/>
	I0829 18:27:39.032392   31894 main.go:141] libmachine: (ha-782425-m03)     
	I0829 18:27:39.032396   31894 main.go:141] libmachine: (ha-782425-m03)   </features>
	I0829 18:27:39.032401   31894 main.go:141] libmachine: (ha-782425-m03)   <cpu mode='host-passthrough'>
	I0829 18:27:39.032451   31894 main.go:141] libmachine: (ha-782425-m03)   
	I0829 18:27:39.032470   31894 main.go:141] libmachine: (ha-782425-m03)   </cpu>
	I0829 18:27:39.032481   31894 main.go:141] libmachine: (ha-782425-m03)   <os>
	I0829 18:27:39.032491   31894 main.go:141] libmachine: (ha-782425-m03)     <type>hvm</type>
	I0829 18:27:39.032502   31894 main.go:141] libmachine: (ha-782425-m03)     <boot dev='cdrom'/>
	I0829 18:27:39.032520   31894 main.go:141] libmachine: (ha-782425-m03)     <boot dev='hd'/>
	I0829 18:27:39.032533   31894 main.go:141] libmachine: (ha-782425-m03)     <bootmenu enable='no'/>
	I0829 18:27:39.032553   31894 main.go:141] libmachine: (ha-782425-m03)   </os>
	I0829 18:27:39.032567   31894 main.go:141] libmachine: (ha-782425-m03)   <devices>
	I0829 18:27:39.032577   31894 main.go:141] libmachine: (ha-782425-m03)     <disk type='file' device='cdrom'>
	I0829 18:27:39.032657   31894 main.go:141] libmachine: (ha-782425-m03)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/boot2docker.iso'/>
	I0829 18:27:39.032673   31894 main.go:141] libmachine: (ha-782425-m03)       <target dev='hdc' bus='scsi'/>
	I0829 18:27:39.032679   31894 main.go:141] libmachine: (ha-782425-m03)       <readonly/>
	I0829 18:27:39.032686   31894 main.go:141] libmachine: (ha-782425-m03)     </disk>
	I0829 18:27:39.032692   31894 main.go:141] libmachine: (ha-782425-m03)     <disk type='file' device='disk'>
	I0829 18:27:39.032702   31894 main.go:141] libmachine: (ha-782425-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 18:27:39.032712   31894 main.go:141] libmachine: (ha-782425-m03)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/ha-782425-m03.rawdisk'/>
	I0829 18:27:39.032720   31894 main.go:141] libmachine: (ha-782425-m03)       <target dev='hda' bus='virtio'/>
	I0829 18:27:39.032727   31894 main.go:141] libmachine: (ha-782425-m03)     </disk>
	I0829 18:27:39.032735   31894 main.go:141] libmachine: (ha-782425-m03)     <interface type='network'>
	I0829 18:27:39.032748   31894 main.go:141] libmachine: (ha-782425-m03)       <source network='mk-ha-782425'/>
	I0829 18:27:39.032759   31894 main.go:141] libmachine: (ha-782425-m03)       <model type='virtio'/>
	I0829 18:27:39.032770   31894 main.go:141] libmachine: (ha-782425-m03)     </interface>
	I0829 18:27:39.032780   31894 main.go:141] libmachine: (ha-782425-m03)     <interface type='network'>
	I0829 18:27:39.032792   31894 main.go:141] libmachine: (ha-782425-m03)       <source network='default'/>
	I0829 18:27:39.032803   31894 main.go:141] libmachine: (ha-782425-m03)       <model type='virtio'/>
	I0829 18:27:39.032813   31894 main.go:141] libmachine: (ha-782425-m03)     </interface>
	I0829 18:27:39.032846   31894 main.go:141] libmachine: (ha-782425-m03)     <serial type='pty'>
	I0829 18:27:39.032872   31894 main.go:141] libmachine: (ha-782425-m03)       <target port='0'/>
	I0829 18:27:39.032885   31894 main.go:141] libmachine: (ha-782425-m03)     </serial>
	I0829 18:27:39.032907   31894 main.go:141] libmachine: (ha-782425-m03)     <console type='pty'>
	I0829 18:27:39.032918   31894 main.go:141] libmachine: (ha-782425-m03)       <target type='serial' port='0'/>
	I0829 18:27:39.032931   31894 main.go:141] libmachine: (ha-782425-m03)     </console>
	I0829 18:27:39.032941   31894 main.go:141] libmachine: (ha-782425-m03)     <rng model='virtio'>
	I0829 18:27:39.032948   31894 main.go:141] libmachine: (ha-782425-m03)       <backend model='random'>/dev/random</backend>
	I0829 18:27:39.032960   31894 main.go:141] libmachine: (ha-782425-m03)     </rng>
	I0829 18:27:39.032970   31894 main.go:141] libmachine: (ha-782425-m03)     
	I0829 18:27:39.032981   31894 main.go:141] libmachine: (ha-782425-m03)     
	I0829 18:27:39.032996   31894 main.go:141] libmachine: (ha-782425-m03)   </devices>
	I0829 18:27:39.033007   31894 main.go:141] libmachine: (ha-782425-m03) </domain>
	I0829 18:27:39.033016   31894 main.go:141] libmachine: (ha-782425-m03) 
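
The domain XML printed above is handed to libvirt by the kvm2 driver. A minimal sketch of defining and starting such a domain with the libvirt Go bindings follows; the import path (libvirt.org/go/libvirt, which requires cgo and libvirt development headers) and the helper name are assumptions here, and this is not the driver's actual code:

// Sketch: define and start a libvirt domain from an XML document like the
// one logged above.
package example

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
	if err != nil {
		return fmt.Errorf("connect: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM
		return fmt.Errorf("start: %w", err)
	}
	return nil
}
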
	I0829 18:27:39.039862   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:87:fd:da in network default
	I0829 18:27:39.040474   31894 main.go:141] libmachine: (ha-782425-m03) Ensuring networks are active...
	I0829 18:27:39.040503   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:39.041141   31894 main.go:141] libmachine: (ha-782425-m03) Ensuring network default is active
	I0829 18:27:39.041412   31894 main.go:141] libmachine: (ha-782425-m03) Ensuring network mk-ha-782425 is active
	I0829 18:27:39.041760   31894 main.go:141] libmachine: (ha-782425-m03) Getting domain xml...
	I0829 18:27:39.042459   31894 main.go:141] libmachine: (ha-782425-m03) Creating domain...
	I0829 18:27:40.284792   31894 main.go:141] libmachine: (ha-782425-m03) Waiting to get IP...
	I0829 18:27:40.285537   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:40.286073   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:40.286113   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:40.286065   32645 retry.go:31] will retry after 295.874325ms: waiting for machine to come up
	I0829 18:27:40.583804   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:40.584416   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:40.584452   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:40.584354   32645 retry.go:31] will retry after 349.576346ms: waiting for machine to come up
	I0829 18:27:40.935822   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:40.936255   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:40.936280   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:40.936208   32645 retry.go:31] will retry after 474.929638ms: waiting for machine to come up
	I0829 18:27:41.412903   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:41.413367   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:41.413394   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:41.413338   32645 retry.go:31] will retry after 540.983998ms: waiting for machine to come up
	I0829 18:27:41.956126   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:41.956649   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:41.956685   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:41.956599   32645 retry.go:31] will retry after 711.407523ms: waiting for machine to come up
	I0829 18:27:42.669344   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:42.669731   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:42.669759   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:42.669680   32645 retry.go:31] will retry after 803.960124ms: waiting for machine to come up
	I0829 18:27:43.475342   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:43.475775   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:43.475804   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:43.475738   32645 retry.go:31] will retry after 949.957391ms: waiting for machine to come up
	I0829 18:27:44.426840   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:44.427252   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:44.427276   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:44.427199   32645 retry.go:31] will retry after 1.186719918s: waiting for machine to come up
	I0829 18:27:45.615314   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:45.615690   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:45.615720   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:45.615636   32645 retry.go:31] will retry after 1.7690001s: waiting for machine to come up
	I0829 18:27:47.385868   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:47.386335   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:47.386364   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:47.386294   32645 retry.go:31] will retry after 1.504430849s: waiting for machine to come up
	I0829 18:27:48.891994   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:48.892463   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:48.892495   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:48.892411   32645 retry.go:31] will retry after 2.537725233s: waiting for machine to come up
	I0829 18:27:51.433157   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:51.433635   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:51.433658   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:51.433589   32645 retry.go:31] will retry after 2.650154903s: waiting for machine to come up
	I0829 18:27:54.085317   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:54.085702   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find current IP address of domain ha-782425-m03 in network mk-ha-782425
	I0829 18:27:54.085728   31894 main.go:141] libmachine: (ha-782425-m03) DBG | I0829 18:27:54.085655   32645 retry.go:31] will retry after 4.258795447s: waiting for machine to come up
	I0829 18:27:58.345916   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:58.346295   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has current primary IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:58.346321   31894 main.go:141] libmachine: (ha-782425-m03) Found IP for machine: 192.168.39.220
	I0829 18:27:58.346337   31894 main.go:141] libmachine: (ha-782425-m03) Reserving static IP address...
	I0829 18:27:58.346666   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find host DHCP lease matching {name: "ha-782425-m03", mac: "52:54:00:b5:78:f3", ip: "192.168.39.220"} in network mk-ha-782425
	I0829 18:27:58.418121   31894 main.go:141] libmachine: (ha-782425-m03) Reserved static IP address: 192.168.39.220
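The "will retry after ..." lines above show the driver polling the libvirt DHCP leases with a growing delay until the m03 guest acquires 192.168.39.220. The following Go sketch illustrates that retry-with-growing-backoff pattern under assumed parameters (initial delay, growth factor, timeout); it is not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check() with a growing delay, mirroring the
// "will retry after ..." pattern in the log above. The starting
// delay, growth factor, and timeout here are illustrative assumptions.
func waitFor(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay on each failed attempt
	}
}

func main() {
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, time.Minute)
	fmt.Println("IP found after", attempts, "attempts")
}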
	I0829 18:27:58.418146   31894 main.go:141] libmachine: (ha-782425-m03) Waiting for SSH to be available...
	I0829 18:27:58.418187   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Getting to WaitForSSH function...
	I0829 18:27:58.420469   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:27:58.420795   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425
	I0829 18:27:58.420829   31894 main.go:141] libmachine: (ha-782425-m03) DBG | unable to find defined IP address of network mk-ha-782425 interface with MAC address 52:54:00:b5:78:f3
	I0829 18:27:58.420969   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH client type: external
	I0829 18:27:58.420993   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa (-rw-------)
	I0829 18:27:58.421022   31894 main.go:141] libmachine: (ha-782425-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:27:58.421036   31894 main.go:141] libmachine: (ha-782425-m03) DBG | About to run SSH command:
	I0829 18:27:58.421049   31894 main.go:141] libmachine: (ha-782425-m03) DBG | exit 0
	I0829 18:27:58.424711   31894 main.go:141] libmachine: (ha-782425-m03) DBG | SSH cmd err, output: exit status 255: 
	I0829 18:27:58.424735   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0829 18:27:58.424745   31894 main.go:141] libmachine: (ha-782425-m03) DBG | command : exit 0
	I0829 18:27:58.424756   31894 main.go:141] libmachine: (ha-782425-m03) DBG | err     : exit status 255
	I0829 18:27:58.424765   31894 main.go:141] libmachine: (ha-782425-m03) DBG | output  : 
	I0829 18:28:01.426845   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Getting to WaitForSSH function...
	I0829 18:28:01.429210   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.429521   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.429560   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.429686   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH client type: external
	I0829 18:28:01.429710   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa (-rw-------)
	I0829 18:28:01.429747   31894 main.go:141] libmachine: (ha-782425-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 18:28:01.429765   31894 main.go:141] libmachine: (ha-782425-m03) DBG | About to run SSH command:
	I0829 18:28:01.429778   31894 main.go:141] libmachine: (ha-782425-m03) DBG | exit 0
	I0829 18:28:01.553920   31894 main.go:141] libmachine: (ha-782425-m03) DBG | SSH cmd err, output: <nil>: 
	I0829 18:28:01.554185   31894 main.go:141] libmachine: (ha-782425-m03) KVM machine creation complete!
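The WaitForSSH phase above probes the guest by running `exit 0` over the external ssh client until it returns status 0 (the first probe fails with status 255 before the lease appears). A minimal Go sketch of that probe is below; the host, key path, ssh options, and 3-second poll interval are assumptions taken from the log, not libmachine's actual code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the target via the system ssh binary,
// roughly like libmachine's WaitForSSH loop logged above.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("/usr/bin/ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // exit status 0 means sshd is answering
}

func main() {
	for !sshReady("192.168.39.220", "/path/to/id_rsa") {
		time.Sleep(3 * time.Second) // roughly the gap between probes in the log
	}
	fmt.Println("SSH is available")
}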
	I0829 18:28:01.554539   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetConfigRaw
	I0829 18:28:01.555039   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:01.555233   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:01.555399   31894 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 18:28:01.555414   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:28:01.556736   31894 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 18:28:01.556749   31894 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 18:28:01.556754   31894 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 18:28:01.556760   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.558787   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.559126   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.559151   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.559276   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.559425   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.559571   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.559705   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.559846   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.560088   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.560103   31894 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 18:28:01.657214   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:28:01.657236   31894 main.go:141] libmachine: Detecting the provisioner...
	I0829 18:28:01.657246   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.660034   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.660406   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.660434   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.660580   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.660751   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.660914   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.661076   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.661222   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.661384   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.661394   31894 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 18:28:01.758625   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 18:28:01.758708   31894 main.go:141] libmachine: found compatible host: buildroot
	I0829 18:28:01.758722   31894 main.go:141] libmachine: Provisioning with buildroot...
	I0829 18:28:01.758733   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:28:01.758977   31894 buildroot.go:166] provisioning hostname "ha-782425-m03"
	I0829 18:28:01.758997   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:28:01.759168   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.761812   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.762222   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.762244   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.762404   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.762553   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.762702   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.762832   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.762990   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.763141   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.763152   31894 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425-m03 && echo "ha-782425-m03" | sudo tee /etc/hostname
	I0829 18:28:01.871627   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425-m03
	
	I0829 18:28:01.871658   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.874406   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.874839   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.874872   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.875012   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:01.875212   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.875367   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:01.875528   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:01.875723   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:01.875921   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:01.875943   31894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:28:01.978192   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
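The shell run above sets the hostname and then idempotently maps it in /etc/hosts: if no line already ends with the hostname, it either rewrites the 127.0.1.1 entry or appends one. The Go sketch below reproduces that edit on the file contents as a string, purely for illustration; it is not minikube's provisioner.

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the /etc/hosts logic in the provisioning
// snippet above: keep the file unchanged if the name is present,
// otherwise rewrite the 127.0.1.1 line or append a new entry.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		if t == name || strings.HasSuffix(t, " "+name) {
			return hosts // already mapped
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name", "ha-782425-m03"))
}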
	I0829 18:28:01.978221   31894 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:28:01.978240   31894 buildroot.go:174] setting up certificates
	I0829 18:28:01.978252   31894 provision.go:84] configureAuth start
	I0829 18:28:01.978263   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetMachineName
	I0829 18:28:01.978529   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:01.981151   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.981561   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.981582   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.981777   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:01.983874   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.984210   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:01.984236   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:01.984374   31894 provision.go:143] copyHostCerts
	I0829 18:28:01.984406   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:28:01.984452   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:28:01.984463   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:28:01.984532   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:28:01.984635   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:28:01.984660   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:28:01.984670   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:28:01.984708   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:28:01.984770   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:28:01.984797   31894 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:28:01.984805   31894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:28:01.984836   31894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:28:01.984919   31894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425-m03 san=[127.0.0.1 192.168.39.220 ha-782425-m03 localhost minikube]
	I0829 18:28:02.246243   31894 provision.go:177] copyRemoteCerts
	I0829 18:28:02.246297   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:28:02.246376   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.248992   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.249348   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.249377   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.249505   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.249710   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.249845   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.249993   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:02.327997   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:28:02.328103   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:28:02.353504   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:28:02.353575   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:28:02.377505   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:28:02.377584   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:28:02.400633   31894 provision.go:87] duration metric: took 422.367175ms to configureAuth
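configureAuth above copies ca.pem, server.pem, and server-key.pem onto the guest under /etc/docker. As a rough illustration only, the sketch below pushes local PEM files to a remote path with scp; the paths and options are assumptions, and the real runner writes through sudo on the guest because /etc/docker is root-owned.

package main

import (
	"fmt"
	"os/exec"
)

// pushCert copies one local PEM file to the guest, loosely modelling the
// copyRemoteCerts step logged above. In minikube the write happens via the
// ssh runner with sudo; plain scp as an unprivileged user would not be able
// to write into /etc/docker.
func pushCert(local, host, remote, keyPath string) error {
	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		local,
		fmt.Sprintf("docker@%s:%s", host, remote))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	files := []struct{ local, remote string }{
		{"certs/ca.pem", "/etc/docker/ca.pem"},
		{"machines/server.pem", "/etc/docker/server.pem"},
		{"machines/server-key.pem", "/etc/docker/server-key.pem"},
	}
	for _, f := range files {
		if err := pushCert(f.local, "192.168.39.220", f.remote, "id_rsa"); err != nil {
			fmt.Println(err)
		}
	}
}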
	I0829 18:28:02.400665   31894 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:28:02.400854   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:02.400922   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.403375   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.403770   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.403799   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.403901   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.404140   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.404305   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.404443   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.404613   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:02.404822   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:02.404843   31894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:28:02.622069   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:28:02.622110   31894 main.go:141] libmachine: Checking connection to Docker...
	I0829 18:28:02.622121   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetURL
	I0829 18:28:02.623387   31894 main.go:141] libmachine: (ha-782425-m03) DBG | Using libvirt version 6000000
	I0829 18:28:02.625466   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.625803   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.625823   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.626006   31894 main.go:141] libmachine: Docker is up and running!
	I0829 18:28:02.626025   31894 main.go:141] libmachine: Reticulating splines...
	I0829 18:28:02.626032   31894 client.go:171] duration metric: took 24.420632742s to LocalClient.Create
	I0829 18:28:02.626053   31894 start.go:167] duration metric: took 24.420688809s to libmachine.API.Create "ha-782425"
	I0829 18:28:02.626062   31894 start.go:293] postStartSetup for "ha-782425-m03" (driver="kvm2")
	I0829 18:28:02.626070   31894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:28:02.626104   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.626333   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:28:02.626366   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.628445   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.628766   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.628791   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.628922   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.629087   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.629219   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.629331   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:02.708657   31894 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:28:02.712593   31894 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:28:02.712615   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:28:02.712673   31894 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:28:02.712741   31894 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:28:02.712750   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:28:02.712826   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:28:02.722183   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:28:02.746168   31894 start.go:296] duration metric: took 120.091913ms for postStartSetup
	I0829 18:28:02.746237   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetConfigRaw
	I0829 18:28:02.746836   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:02.749600   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.750012   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.750042   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.750378   31894 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:28:02.750629   31894 start.go:128] duration metric: took 24.563428836s to createHost
	I0829 18:28:02.750658   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.753152   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.753505   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.753533   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.753721   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.753906   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.754061   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.754209   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.754364   31894 main.go:141] libmachine: Using SSH client type: native
	I0829 18:28:02.754538   31894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0829 18:28:02.754550   31894 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:28:02.850607   31894 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724956082.826343446
	
	I0829 18:28:02.850631   31894 fix.go:216] guest clock: 1724956082.826343446
	I0829 18:28:02.850641   31894 fix.go:229] Guest: 2024-08-29 18:28:02.826343446 +0000 UTC Remote: 2024-08-29 18:28:02.750643528 +0000 UTC m=+144.918639060 (delta=75.699918ms)
	I0829 18:28:02.850670   31894 fix.go:200] guest clock delta is within tolerance: 75.699918ms
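The clock check above runs `date +%s.%N` on the guest and compares it with the host clock, accepting the machine because the 75.699918ms drift is within tolerance. A small Go sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not the value fix.go uses.

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance compares the guest clock (parsed from `date +%s.%N`)
// against the host clock and reports the absolute drift, as in the
// guest-clock check logged above.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1724956082, 826343446)           // from "date +%s.%N" in the log
	host := guest.Add(-75699918 * time.Nanosecond)      // host clock, 75.699918ms behind
	d, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}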
	I0829 18:28:02.850681   31894 start.go:83] releasing machines lock for "ha-782425-m03", held for 24.663603239s
	I0829 18:28:02.850710   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.851009   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:02.854120   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.854546   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.854573   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.856517   31894 out.go:177] * Found network options:
	I0829 18:28:02.857741   31894 out.go:177]   - NO_PROXY=192.168.39.39,192.168.39.253
	W0829 18:28:02.859050   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 18:28:02.859077   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:28:02.859094   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.859605   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.859791   31894 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:28:02.859876   31894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:28:02.859917   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	W0829 18:28:02.859982   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	W0829 18:28:02.860005   31894 proxy.go:119] fail to check proxy env: Error ip not in block
	I0829 18:28:02.860062   31894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:28:02.860082   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:28:02.862414   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.862781   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.862807   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.862827   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.862998   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.863155   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.863292   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:02.863330   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.863394   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:02.863455   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:28:02.863511   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:02.863606   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:28:02.863722   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:28:02.863855   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:28:03.087651   31894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:28:03.094619   31894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:28:03.094686   31894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:28:03.109806   31894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 18:28:03.109831   31894 start.go:495] detecting cgroup driver to use...
	I0829 18:28:03.109913   31894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:28:03.126690   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:28:03.142265   31894 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:28:03.142319   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:28:03.156210   31894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:28:03.169742   31894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:28:03.278641   31894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:28:03.431999   31894 docker.go:233] disabling docker service ...
	I0829 18:28:03.432062   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:28:03.445416   31894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:28:03.457051   31894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:28:03.577740   31894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:28:03.692002   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:28:03.706207   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:28:03.723020   31894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:28:03.723077   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.734591   31894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:28:03.734655   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.744783   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.754403   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.766763   31894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:28:03.778511   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.788947   31894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.805748   31894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:28:03.815930   31894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:28:03.824744   31894 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 18:28:03.824798   31894 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 18:28:03.837350   31894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:28:03.845996   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:28:03.957638   31894 ssh_runner.go:195] Run: sudo systemctl restart crio
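The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before restarting CRI-O. The Go sketch below shows the same "replace an existing key = value line" idea applied to file contents; it is a simplification, not the sed commands minikube actually runs, and it deliberately skips the seeding of missing keys that the grep || sed step in the log handles.

package main

import (
	"fmt"
	"strings"
)

// setConfValue replaces any existing `key = ...` line with the desired
// quoted value, mirroring the sed edits to 02-crio.conf logged above.
// Lines are left untouched if the key is absent.
func setConfValue(conf, key, value string) string {
	lines := strings.Split(conf, "\n")
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), key+" =") {
			lines[i] = fmt.Sprintf("%s = %q", key, value)
		}
	}
	return strings.Join(lines, "\n")
}

func main() {
	conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\""
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}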
	I0829 18:28:04.044780   31894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:28:04.044862   31894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:28:04.049116   31894 start.go:563] Will wait 60s for crictl version
	I0829 18:28:04.049174   31894 ssh_runner.go:195] Run: which crictl
	I0829 18:28:04.052467   31894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:28:04.091186   31894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:28:04.091265   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:28:04.122455   31894 ssh_runner.go:195] Run: crio --version
	I0829 18:28:04.152483   31894 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:28:04.153744   31894 out.go:177]   - env NO_PROXY=192.168.39.39
	I0829 18:28:04.154982   31894 out.go:177]   - env NO_PROXY=192.168.39.39,192.168.39.253
	I0829 18:28:04.156108   31894 main.go:141] libmachine: (ha-782425-m03) Calling .GetIP
	I0829 18:28:04.159054   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:04.159540   31894 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:28:04.159576   31894 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:28:04.159747   31894 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:28:04.163668   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:28:04.175622   31894 mustload.go:65] Loading cluster: ha-782425
	I0829 18:28:04.175901   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:04.176177   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:28:04.176211   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:28:04.191663   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
	I0829 18:28:04.192143   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:28:04.192585   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:28:04.192621   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:28:04.193002   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:28:04.193191   31894 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:28:04.194781   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:28:04.195118   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:28:04.195161   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:28:04.209854   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0829 18:28:04.210268   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:28:04.210790   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:28:04.210810   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:28:04.211200   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:28:04.211506   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:28:04.211690   31894 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.220
	I0829 18:28:04.211704   31894 certs.go:194] generating shared ca certs ...
	I0829 18:28:04.211717   31894 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:28:04.211836   31894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:28:04.211871   31894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:28:04.211880   31894 certs.go:256] generating profile certs ...
	I0829 18:28:04.211952   31894 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:28:04.211975   31894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847
	I0829 18:28:04.211989   31894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.253 192.168.39.220 192.168.39.254]
	I0829 18:28:04.348270   31894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847 ...
	I0829 18:28:04.348307   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847: {Name:mk14139edb6a62e8e4d43837fb216554daa427a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:28:04.348503   31894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847 ...
	I0829 18:28:04.348520   31894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847: {Name:mk7bfcdc5e7a3699a316207b281b7344bc61aee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:28:04.348624   31894 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.dfe88847 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:28:04.348793   31894 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.dfe88847 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
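The profile cert generation above signs an apiserver certificate whose SANs include the service VIPs and all three control-plane IPs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.39, 192.168.39.253, 192.168.39.220, 192.168.39.254), which is what lets clients reach the API server on any of those addresses. The sketch below issues a CA-signed server certificate with those IP SANs using Go's standard crypto/x509; the CA here is generated on the fly for the example and the whole block is illustrative, not minikube's crypto.go.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.key/ca.crt (assumption).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN IPs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.39"), net.ParseIP("192.168.39.253"),
			net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}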
	I0829 18:28:04.348965   31894 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:28:04.348983   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:28:04.349001   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:28:04.349020   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:28:04.349060   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:28:04.349077   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:28:04.349091   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:28:04.349107   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:28:04.349124   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:28:04.349198   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:28:04.349241   31894 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:28:04.349254   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:28:04.349288   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:28:04.349320   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:28:04.349352   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:28:04.349406   31894 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:28:04.349446   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.349466   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.349484   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.349524   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:28:04.352479   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:04.352867   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:28:04.352896   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:04.353149   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:28:04.353352   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:28:04.353490   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:28:04.353617   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:28:04.430392   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0829 18:28:04.435394   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0829 18:28:04.446366   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0829 18:28:04.450926   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0829 18:28:04.460806   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0829 18:28:04.464472   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0829 18:28:04.483922   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0829 18:28:04.488739   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0829 18:28:04.500498   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0829 18:28:04.504343   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0829 18:28:04.513600   31894 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0829 18:28:04.517538   31894 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0829 18:28:04.527506   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:28:04.551092   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:28:04.572762   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:28:04.597717   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:28:04.619742   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0829 18:28:04.640777   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 18:28:04.662127   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:28:04.683766   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:28:04.706856   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:28:04.731433   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:28:04.753760   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:28:04.776862   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0829 18:28:04.792923   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0829 18:28:04.808730   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0829 18:28:04.824668   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0829 18:28:04.840759   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0829 18:28:04.856003   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0829 18:28:04.870873   31894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0829 18:28:04.886451   31894 ssh_runner.go:195] Run: openssl version
	I0829 18:28:04.891673   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:28:04.902385   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.907147   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.907207   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:28:04.912620   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:28:04.922151   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:28:04.932091   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.936231   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.936287   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:28:04.941423   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:28:04.951794   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:28:04.961377   31894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.965411   31894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.965469   31894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:28:04.970625   31894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
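	The openssl/ln -fs sequence above follows OpenSSL's hashed-directory convention: each extra CA placed in /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix so TLS clients can locate it. A minimal standalone sketch of that convention in Go (not minikube's own code; the paths and helper name are assumptions):

// hash_symlink.go - illustrative sketch only: install a CA under its OpenSSL
// subject-hash name, mirroring the "openssl x509 -hash -noout" and "ln -fs"
// commands logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// Ask openssl for the subject hash, exactly as the logged command does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Symlink /etc/ssl/certs/<hash>.0 -> the certificate.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ignore "does not exist"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}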
	I0829 18:28:04.980209   31894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:28:04.983763   31894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:28:04.983808   31894 kubeadm.go:934] updating node {m03 192.168.39.220 8443 v1.31.0 crio true true} ...
	I0829 18:28:04.983895   31894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:28:04.983923   31894 kube-vip.go:115] generating kube-vip config ...
	I0829 18:28:04.983958   31894 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:28:05.000225   31894 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:28:05.000296   31894 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
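	The kube-vip manifest above is rendered from a template with the control-plane VIP (192.168.39.254), port, interface and load-balancer settings filled in. A trimmed-down sketch of generating such a static-pod manifest with Go's text/template, using a simplified parameter struct rather than minikube's actual kube-vip template:

// kubevip_template.go - illustrative sketch only: renders a reduced kube-vip
// static-pod manifest from a few parameters. Field names follow the logged
// config; the struct and template here are assumptions, not minikube code.
package main

import (
	"os"
	"text/template"
)

type kubeVipParams struct {
	VIP       string // e.g. 192.168.39.254, the VIP from the log
	Port      string // e.g. "8443"
	Interface string // e.g. eth0
	Image     string // e.g. ghcr.io/kube-vip/kube-vip:v0.8.0
	LBEnable  bool   // control-plane load-balancing, auto-enabled above
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: [manager]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
{{- if .LBEnable }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
{{- end }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values mirror the logged config for ha-782425.
	_ = t.Execute(os.Stdout, kubeVipParams{
		VIP: "192.168.39.254", Port: "8443", Interface: "eth0",
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0", LBEnable: true,
	})
}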
	I0829 18:28:05.000356   31894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:28:05.009427   31894 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0829 18:28:05.009485   31894 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0829 18:28:05.018082   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0829 18:28:05.018112   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0829 18:28:05.018082   31894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0829 18:28:05.018126   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:28:05.018155   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:05.018159   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:28:05.018217   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0829 18:28:05.018252   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0829 18:28:05.035607   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0829 18:28:05.035616   31894 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:28:05.035664   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0829 18:28:05.035689   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0829 18:28:05.035731   31894 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0829 18:28:05.035651   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0829 18:28:05.066515   31894 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0829 18:28:05.066546   31894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
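	The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines mean each binary is fetched and verified against its published .sha256 file before being copied into /var/lib/minikube/binaries. A standard-library-only sketch of that download-and-verify step (an illustration under those assumptions, not the actual downloader):

// fetch_verify.go - illustrative sketch: download a release binary and check it
// against the published .sha256 digest, as the checksum URLs in the log imply.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256") // checksum file referenced in the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	want := strings.Fields(string(sum))[0] // first token is the hex digest
	if got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubectl checksum OK:", got)
}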
	I0829 18:28:05.866110   31894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0829 18:28:05.874648   31894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 18:28:05.890771   31894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:28:05.905587   31894 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 18:28:05.920867   31894 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:28:05.924680   31894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:28:05.935968   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:28:06.040663   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:28:06.056566   31894 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:28:06.056969   31894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:28:06.057017   31894 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:28:06.072764   31894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0829 18:28:06.073174   31894 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:28:06.073647   31894 main.go:141] libmachine: Using API Version  1
	I0829 18:28:06.073669   31894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:28:06.073958   31894 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:28:06.074168   31894 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:28:06.074330   31894 start.go:317] joinCluster: &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:28:06.074492   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0829 18:28:06.074512   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:28:06.077448   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:06.077890   31894 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:28:06.077917   31894 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:28:06.078021   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:28:06.078193   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:28:06.078373   31894 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:28:06.078524   31894 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:28:06.229649   31894 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:28:06.229711   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9l1oah.336h28y6daulw1a3 --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0829 18:28:29.021848   31894 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9l1oah.336h28y6daulw1a3 --discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-782425-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (22.792103025s)
	I0829 18:28:29.021899   31894 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0829 18:28:29.689851   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-782425-m03 minikube.k8s.io/updated_at=2024_08_29T18_28_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=ha-782425 minikube.k8s.io/primary=false
	I0829 18:28:29.817880   31894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-782425-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0829 18:28:29.937397   31894 start.go:319] duration metric: took 23.863062158s to joinCluster
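	The join itself is the logged "kubeadm join control-plane.minikube.internal:8443 ... --control-plane" command executed on the new node. A rough sketch of assembling and running those same flags locally, with placeholder token and CA hash values (real runs need root on the node and valid credentials):

// join_controlplane.go - illustrative sketch: build and run a "kubeadm join" for
// an additional control-plane node using the flags seen in the log. The token
// and discovery hash below are placeholders, not values from this run.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func joinArgs(endpoint, token, caHash, nodeName, advertiseIP string) []string {
	return []string{
		"join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443",
	}
}

func main() {
	args := joinArgs(
		"control-plane.minikube.internal:8443",
		"<token>",               // placeholder
		"sha256:<ca-cert-hash>", // placeholder
		"ha-782425-m03",
		"192.168.39.220",
	)
	cmd := exec.Command("kubeadm", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	fmt.Println("running:", cmd.String())
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}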
	I0829 18:28:29.937564   31894 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 18:28:29.937913   31894 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:28:29.938932   31894 out.go:177] * Verifying Kubernetes components...
	I0829 18:28:29.940500   31894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:28:30.196095   31894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:28:30.220231   31894 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:28:30.220593   31894 kapi.go:59] client config for ha-782425: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.crt", KeyFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key", CAFile:"/home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0829 18:28:30.220689   31894 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.39:8443
	I0829 18:28:30.221049   31894 node_ready.go:35] waiting up to 6m0s for node "ha-782425-m03" to be "Ready" ...
	I0829 18:28:30.221187   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:30.221200   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:30.221211   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:30.221218   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:30.224755   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:30.721330   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:30.721355   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:30.721367   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:30.721373   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:30.728584   31894 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0829 18:28:31.221367   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:31.221389   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:31.221401   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:31.221405   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:31.224755   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:31.721799   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:31.721824   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:31.721831   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:31.721835   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:31.725200   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:32.222133   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:32.222153   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:32.222161   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:32.222165   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:32.225507   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:32.225929   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:32.721309   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:32.721334   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:32.721345   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:32.721351   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:32.725144   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:33.221227   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:33.221250   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:33.221262   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:33.221266   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:33.229418   31894 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 18:28:33.721432   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:33.721451   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:33.721457   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:33.721461   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:33.724816   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:34.221757   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:34.221781   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:34.221788   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:34.221792   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:34.224883   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:34.721339   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:34.721362   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:34.721373   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:34.721379   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:34.725423   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:34.726183   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:35.221535   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:35.221557   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:35.221567   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:35.221578   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:35.224396   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:35.721928   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:35.721952   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:35.721961   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:35.721965   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:35.725715   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:36.222108   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:36.222135   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:36.222144   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:36.222151   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:36.225212   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:36.722020   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:36.722041   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:36.722049   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:36.722052   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:36.725279   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:37.222211   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:37.222234   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:37.222242   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:37.222247   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:37.225639   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:37.226238   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:37.721548   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:37.721574   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:37.721587   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:37.721595   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:37.726891   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:38.221245   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:38.221272   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:38.221283   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:38.221288   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:38.224980   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:38.722210   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:38.722232   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:38.722240   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:38.722243   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:38.725861   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:39.221264   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:39.221285   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:39.221297   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:39.221302   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:39.224442   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:39.721756   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:39.721778   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:39.721785   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:39.721789   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:39.725432   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:39.726047   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:40.221412   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:40.221436   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:40.221446   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:40.221453   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:40.224989   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:40.721984   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:40.722006   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:40.722014   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:40.722018   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:40.725151   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:41.221578   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:41.221601   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:41.221609   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:41.221612   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:41.224550   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:41.721614   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:41.721635   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:41.721646   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:41.721651   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:41.724956   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:42.221745   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:42.221772   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:42.221785   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:42.221791   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:42.224724   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:42.225407   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:42.722270   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:42.722294   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:42.722302   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:42.722307   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:42.725463   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:43.221446   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:43.221466   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:43.221474   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:43.221478   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:43.224544   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:43.721514   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:43.721540   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:43.721549   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:43.721553   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:43.724824   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:44.221541   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:44.221563   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:44.221573   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:44.221579   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:44.225820   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:44.226444   31894 node_ready.go:53] node "ha-782425-m03" has status "Ready":"False"
	I0829 18:28:44.722232   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:44.722256   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:44.722266   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:44.722273   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:44.726293   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:45.221702   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:45.221724   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:45.221734   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:45.221742   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:45.225230   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:45.722155   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:45.722177   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:45.722185   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:45.722189   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:45.725813   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.221239   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:46.221262   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.221270   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.221276   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.225170   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.721677   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:46.721705   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.721715   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.721723   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.730104   31894 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0829 18:28:46.730837   31894 node_ready.go:49] node "ha-782425-m03" has status "Ready":"True"
	I0829 18:28:46.730866   31894 node_ready.go:38] duration metric: took 16.509796396s for node "ha-782425-m03" to be "Ready" ...
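	The long run of GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03 requests above is a readiness poll: fetch the node roughly every 500ms until its NodeReady condition reports True or the 6m budget expires. A client-go sketch of the same loop (example values taken from this run; this is not the round_trippers-based code that produced the log):

// wait_node_ready.go - illustrative sketch of the polling loop logged above:
// GET the node until the NodeReady condition is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as loaded earlier in the log; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19531-13056/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // same 6m budget as the log
	defer cancel()

	const name = "ha-782425-m03"
	for {
		n, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Printf("node %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Fprintf(os.Stderr, "timed out waiting for node %q\n", name)
			os.Exit(1)
		case <-time.After(500 * time.Millisecond): // poll interval, as in the log
		}
	}
}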
	I0829 18:28:46.730877   31894 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:28:46.730975   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:46.730989   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.730999   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.731003   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.743081   31894 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0829 18:28:46.751794   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.751909   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-nw2x2
	I0829 18:28:46.751921   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.751931   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.751943   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.757395   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:46.760165   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:46.760186   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.760196   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.760200   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.765275   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:46.765758   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.765776   31894 pod_ready.go:82] duration metric: took 13.947729ms for pod "coredns-6f6b679f8f-nw2x2" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.765785   31894 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.765836   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qhxm5
	I0829 18:28:46.765845   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.765852   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.765857   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.769497   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.770133   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:46.770147   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.770154   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.770158   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.773596   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.774413   31894 pod_ready.go:93] pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.774431   31894 pod_ready.go:82] duration metric: took 8.64041ms for pod "coredns-6f6b679f8f-qhxm5" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.774440   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.774491   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425
	I0829 18:28:46.774498   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.774505   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.774511   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.777301   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:46.777927   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:46.777946   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.777958   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.777963   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.780612   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:46.781398   31894 pod_ready.go:93] pod "etcd-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.781418   31894 pod_ready.go:82] duration metric: took 6.971235ms for pod "etcd-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.781430   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.781492   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m02
	I0829 18:28:46.781502   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.781512   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.781521   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.784465   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:46.785348   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:46.785368   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.785377   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.785383   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.788415   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:46.788909   31894 pod_ready.go:93] pod "etcd-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:46.788930   31894 pod_ready.go:82] duration metric: took 7.491319ms for pod "etcd-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.788941   31894 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:46.922255   31894 request.go:632] Waited for 133.262473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m03
	I0829 18:28:46.922315   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/etcd-ha-782425-m03
	I0829 18:28:46.922320   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:46.922327   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:46.922332   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:46.925911   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.121880   31894 request.go:632] Waited for 195.274268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:47.121948   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:47.121957   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.121964   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.121970   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.126052   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:47.126569   31894 pod_ready.go:93] pod "etcd-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:47.126587   31894 pod_ready.go:82] duration metric: took 337.639932ms for pod "etcd-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.126610   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.322691   31894 request.go:632] Waited for 196.016729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:28:47.322764   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425
	I0829 18:28:47.322770   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.322777   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.322781   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.326137   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.522154   31894 request.go:632] Waited for 195.372895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:47.522217   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:47.522225   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.522236   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.522244   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.525276   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.525822   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:47.525841   31894 pod_ready.go:82] duration metric: took 399.222875ms for pod "kube-apiserver-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.525853   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.721931   31894 request.go:632] Waited for 196.002454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:28:47.721989   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m02
	I0829 18:28:47.721996   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.722010   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.722019   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.726474   31894 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0829 18:28:47.921944   31894 request.go:632] Waited for 194.787802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:47.921998   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:47.922004   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:47.922011   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:47.922015   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:47.925279   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:47.925797   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:47.925815   31894 pod_ready.go:82] duration metric: took 399.954449ms for pod "kube-apiserver-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:47.925825   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.122332   31894 request.go:632] Waited for 196.413935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m03
	I0829 18:28:48.122401   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-782425-m03
	I0829 18:28:48.122407   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.122417   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.122423   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.125290   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:48.322178   31894 request.go:632] Waited for 196.180445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:48.322242   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:48.322247   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.322253   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.322257   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.325601   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:48.326025   31894 pod_ready.go:93] pod "kube-apiserver-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:48.326045   31894 pod_ready.go:82] duration metric: took 400.213709ms for pod "kube-apiserver-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.326055   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.522038   31894 request.go:632] Waited for 195.915787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:28:48.522130   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425
	I0829 18:28:48.522137   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.522144   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.522147   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.525256   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:48.722392   31894 request.go:632] Waited for 196.381557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:48.722472   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:48.722477   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.722485   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.722490   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.725847   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:48.726715   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:48.726733   31894 pod_ready.go:82] duration metric: took 400.672433ms for pod "kube-controller-manager-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.726743   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:48.921779   31894 request.go:632] Waited for 194.971702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:28:48.921853   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m02
	I0829 18:28:48.921859   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:48.921866   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:48.921873   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:48.925541   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.122504   31894 request.go:632] Waited for 196.304236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.122631   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.122653   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.122661   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.122665   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.125446   31894 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0829 18:28:49.126172   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:49.126197   31894 pod_ready.go:82] duration metric: took 399.447536ms for pod "kube-controller-manager-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.126214   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.321919   31894 request.go:632] Waited for 195.623601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m03
	I0829 18:28:49.321973   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-782425-m03
	I0829 18:28:49.321978   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.321985   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.321989   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.325493   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.522516   31894 request.go:632] Waited for 196.379616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:49.522587   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:49.522592   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.522600   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.522604   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.525854   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.526428   31894 pod_ready.go:93] pod "kube-controller-manager-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:49.526448   31894 pod_ready.go:82] duration metric: took 400.224639ms for pod "kube-controller-manager-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.526458   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.721698   31894 request.go:632] Waited for 195.179732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:28:49.721776   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5k8xr
	I0829 18:28:49.721782   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.721789   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.721793   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.725248   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.922356   31894 request.go:632] Waited for 196.3754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.922406   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:49.922411   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:49.922419   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:49.922422   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:49.925654   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:49.926219   31894 pod_ready.go:93] pod "kube-proxy-5k8xr" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:49.926239   31894 pod_ready.go:82] duration metric: took 399.774718ms for pod "kube-proxy-5k8xr" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:49.926249   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.122266   31894 request.go:632] Waited for 195.95942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:28:50.122353   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5kbx
	I0829 18:28:50.122364   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.122375   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.122385   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.125962   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.322235   31894 request.go:632] Waited for 195.375684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:50.322320   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:50.322327   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.322334   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.322339   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.325864   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.326461   31894 pod_ready.go:93] pod "kube-proxy-d5kbx" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:50.326481   31894 pod_ready.go:82] duration metric: took 400.225563ms for pod "kube-proxy-d5kbx" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.326493   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vzss9" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.522571   31894 request.go:632] Waited for 195.985083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vzss9
	I0829 18:28:50.522635   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vzss9
	I0829 18:28:50.522643   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.522654   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.522661   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.525714   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.721758   31894 request.go:632] Waited for 195.287107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:50.721811   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:50.721818   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.721828   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.721834   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.725171   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:50.725797   31894 pod_ready.go:93] pod "kube-proxy-vzss9" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:50.725822   31894 pod_ready.go:82] duration metric: took 399.321762ms for pod "kube-proxy-vzss9" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.725835   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:50.921909   31894 request.go:632] Waited for 195.989287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:28:50.921974   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425
	I0829 18:28:50.921981   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:50.921992   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:50.922004   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:50.925258   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.122136   31894 request.go:632] Waited for 196.22971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:51.122197   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425
	I0829 18:28:51.122203   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.122221   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.122229   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.125766   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.126286   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:51.126307   31894 pod_ready.go:82] duration metric: took 400.464622ms for pod "kube-scheduler-ha-782425" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.126324   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.322314   31894 request.go:632] Waited for 195.931418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:28:51.322368   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m02
	I0829 18:28:51.322374   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.322380   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.322384   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.325832   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.522729   31894 request.go:632] Waited for 196.285365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:51.522777   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m02
	I0829 18:28:51.522783   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.522789   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.522793   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.526109   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.526621   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:51.526642   31894 pod_ready.go:82] duration metric: took 400.311007ms for pod "kube-scheduler-ha-782425-m02" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.526657   31894 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.722644   31894 request.go:632] Waited for 195.923513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m03
	I0829 18:28:51.722709   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-782425-m03
	I0829 18:28:51.722715   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.722722   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.722726   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.726006   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.922120   31894 request.go:632] Waited for 195.361975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:51.922187   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes/ha-782425-m03
	I0829 18:28:51.922195   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.922202   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.922206   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.925443   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:51.925926   31894 pod_ready.go:93] pod "kube-scheduler-ha-782425-m03" in "kube-system" namespace has status "Ready":"True"
	I0829 18:28:51.925944   31894 pod_ready.go:82] duration metric: took 399.278435ms for pod "kube-scheduler-ha-782425-m03" in "kube-system" namespace to be "Ready" ...
	I0829 18:28:51.925954   31894 pod_ready.go:39] duration metric: took 5.195065407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:28:51.925970   31894 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:28:51.926017   31894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:28:51.941439   31894 api_server.go:72] duration metric: took 22.003829538s to wait for apiserver process to appear ...
	I0829 18:28:51.941465   31894 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:28:51.941486   31894 api_server.go:253] Checking apiserver healthz at https://192.168.39.39:8443/healthz ...
	I0829 18:28:51.945619   31894 api_server.go:279] https://192.168.39.39:8443/healthz returned 200:
	ok
	I0829 18:28:51.945703   31894 round_trippers.go:463] GET https://192.168.39.39:8443/version
	I0829 18:28:51.945714   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:51.945724   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:51.945732   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:51.946661   31894 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0829 18:28:51.946718   31894 api_server.go:141] control plane version: v1.31.0
	I0829 18:28:51.946733   31894 api_server.go:131] duration metric: took 5.260491ms to wait for apiserver health ...
	I0829 18:28:51.946741   31894 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:28:52.122165   31894 request.go:632] Waited for 175.351343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.122254   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.122262   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.122273   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.122281   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.127609   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:52.135153   31894 system_pods.go:59] 24 kube-system pods found
	I0829 18:28:52.135192   31894 system_pods.go:61] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:28:52.135197   31894 system_pods.go:61] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:28:52.135200   31894 system_pods.go:61] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:28:52.135203   31894 system_pods.go:61] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:28:52.135206   31894 system_pods.go:61] "etcd-ha-782425-m03" [1b112206-4321-4ab1-a4d1-7e62cd911954] Running
	I0829 18:28:52.135208   31894 system_pods.go:61] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:28:52.135211   31894 system_pods.go:61] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:28:52.135214   31894 system_pods.go:61] "kindnet-m5jqn" [4df3ca7e-7d2e-414c-8d1f-77ac7ab484fb] Running
	I0829 18:28:52.135217   31894 system_pods.go:61] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:28:52.135221   31894 system_pods.go:61] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:28:52.135224   31894 system_pods.go:61] "kube-apiserver-ha-782425-m03" [f20451ab-aa25-4414-afba-727618ae119b] Running
	I0829 18:28:52.135233   31894 system_pods.go:61] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:28:52.135240   31894 system_pods.go:61] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:28:52.135243   31894 system_pods.go:61] "kube-controller-manager-ha-782425-m03" [38b82fbd-248d-4b1f-ae8a-284d2fb9cf0b] Running
	I0829 18:28:52.135245   31894 system_pods.go:61] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:28:52.135248   31894 system_pods.go:61] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:28:52.135251   31894 system_pods.go:61] "kube-proxy-vzss9" [de587dda-283e-4c9e-93e6-0e035656bf2b] Running
	I0829 18:28:52.135255   31894 system_pods.go:61] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:28:52.135258   31894 system_pods.go:61] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:28:52.135262   31894 system_pods.go:61] "kube-scheduler-ha-782425-m03" [7f68c7ca-ac7e-49ac-b0c7-e0a27c30349e] Running
	I0829 18:28:52.135265   31894 system_pods.go:61] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:28:52.135270   31894 system_pods.go:61] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:28:52.135272   31894 system_pods.go:61] "kube-vip-ha-782425-m03" [5472756b-a611-427c-9385-028188ba45de] Running
	I0829 18:28:52.135278   31894 system_pods.go:61] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:28:52.135284   31894 system_pods.go:74] duration metric: took 188.537157ms to wait for pod list to return data ...
	I0829 18:28:52.135292   31894 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:28:52.322724   31894 request.go:632] Waited for 187.368856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:28:52.322777   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/default/serviceaccounts
	I0829 18:28:52.322782   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.322790   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.322795   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.328425   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:52.328552   31894 default_sa.go:45] found service account: "default"
	I0829 18:28:52.328572   31894 default_sa.go:55] duration metric: took 193.269199ms for default service account to be created ...
	I0829 18:28:52.328581   31894 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:28:52.521761   31894 request.go:632] Waited for 193.120158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.521843   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/namespaces/kube-system/pods
	I0829 18:28:52.521850   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.521857   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.521864   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.527155   31894 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0829 18:28:52.536939   31894 system_pods.go:86] 24 kube-system pods found
	I0829 18:28:52.536977   31894 system_pods.go:89] "coredns-6f6b679f8f-nw2x2" [ab54ce43-4bd7-43ff-aad9-5cac2beb035b] Running
	I0829 18:28:52.536985   31894 system_pods.go:89] "coredns-6f6b679f8f-qhxm5" [286ec4e7-9401-4bdd-b8b2-86f00f130fc2] Running
	I0829 18:28:52.536991   31894 system_pods.go:89] "etcd-ha-782425" [743c3f2f-c86c-4f74-a7ef-9c95c0af0857] Running
	I0829 18:28:52.536997   31894 system_pods.go:89] "etcd-ha-782425-m02" [e70a5056-2675-48cf-8275-a630a1086c60] Running
	I0829 18:28:52.537003   31894 system_pods.go:89] "etcd-ha-782425-m03" [1b112206-4321-4ab1-a4d1-7e62cd911954] Running
	I0829 18:28:52.537009   31894 system_pods.go:89] "kindnet-7l5kn" [1a9ac71b-acaf-4ac9-b330-943525137d23] Running
	I0829 18:28:52.537014   31894 system_pods.go:89] "kindnet-kw2zk" [61a4cb33-47d5-4dd2-8711-d2524cf1133c] Running
	I0829 18:28:52.537019   31894 system_pods.go:89] "kindnet-m5jqn" [4df3ca7e-7d2e-414c-8d1f-77ac7ab484fb] Running
	I0829 18:28:52.537024   31894 system_pods.go:89] "kube-apiserver-ha-782425" [b51e7db3-35e5-4e46-aeb4-9e98bfecd2a3] Running
	I0829 18:28:52.537029   31894 system_pods.go:89] "kube-apiserver-ha-782425-m02" [c1faa8f8-b5fd-41e7-bee3-dcdd6f4f06cc] Running
	I0829 18:28:52.537035   31894 system_pods.go:89] "kube-apiserver-ha-782425-m03" [f20451ab-aa25-4414-afba-727618ae119b] Running
	I0829 18:28:52.537041   31894 system_pods.go:89] "kube-controller-manager-ha-782425" [008c32bf-b8f4-4cbe-a550-3820a3980f8f] Running
	I0829 18:28:52.537082   31894 system_pods.go:89] "kube-controller-manager-ha-782425-m02" [fcfc6d1d-ef6d-4b04-a86f-08d92de0883e] Running
	I0829 18:28:52.537094   31894 system_pods.go:89] "kube-controller-manager-ha-782425-m03" [38b82fbd-248d-4b1f-ae8a-284d2fb9cf0b] Running
	I0829 18:28:52.537100   31894 system_pods.go:89] "kube-proxy-5k8xr" [d07a092c-2a97-4bc5-ba9e-f0bf1022df8e] Running
	I0829 18:28:52.537106   31894 system_pods.go:89] "kube-proxy-d5kbx" [9033b7fd-0da5-4558-8c52-0ba06a7a4704] Running
	I0829 18:28:52.537116   31894 system_pods.go:89] "kube-proxy-vzss9" [de587dda-283e-4c9e-93e6-0e035656bf2b] Running
	I0829 18:28:52.537124   31894 system_pods.go:89] "kube-scheduler-ha-782425" [72ba768c-61dd-4c95-a640-cdc3782b6f4c] Running
	I0829 18:28:52.537133   31894 system_pods.go:89] "kube-scheduler-ha-782425-m02" [56fa0075-25e4-42b7-b7b1-1b6d55643fcd] Running
	I0829 18:28:52.537138   31894 system_pods.go:89] "kube-scheduler-ha-782425-m03" [7f68c7ca-ac7e-49ac-b0c7-e0a27c30349e] Running
	I0829 18:28:52.537147   31894 system_pods.go:89] "kube-vip-ha-782425" [83b3c3eb-b05b-47de-bc2a-ee1822b50b77] Running
	I0829 18:28:52.537155   31894 system_pods.go:89] "kube-vip-ha-782425-m02" [9655f7bc-ba21-4a7b-b223-18e52c655972] Running
	I0829 18:28:52.537160   31894 system_pods.go:89] "kube-vip-ha-782425-m03" [5472756b-a611-427c-9385-028188ba45de] Running
	I0829 18:28:52.537167   31894 system_pods.go:89] "storage-provisioner" [f41ebca1-035e-44b0-96a2-3aa1e794bc1f] Running
	I0829 18:28:52.537174   31894 system_pods.go:126] duration metric: took 208.587686ms to wait for k8s-apps to be running ...
	I0829 18:28:52.537185   31894 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:28:52.537239   31894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:28:52.552882   31894 system_svc.go:56] duration metric: took 15.686393ms WaitForService to wait for kubelet
	I0829 18:28:52.552921   31894 kubeadm.go:582] duration metric: took 22.61531535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:28:52.552953   31894 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:28:52.722301   31894 request.go:632] Waited for 169.275812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.39:8443/api/v1/nodes
	I0829 18:28:52.722389   31894 round_trippers.go:463] GET https://192.168.39.39:8443/api/v1/nodes
	I0829 18:28:52.722400   31894 round_trippers.go:469] Request Headers:
	I0829 18:28:52.722410   31894 round_trippers.go:473]     Accept: application/json, */*
	I0829 18:28:52.722421   31894 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0829 18:28:52.726377   31894 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0829 18:28:52.727509   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:28:52.727542   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:28:52.727575   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:28:52.727581   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:28:52.727590   31894 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 18:28:52.727602   31894 node_conditions.go:123] node cpu capacity is 2
	I0829 18:28:52.727612   31894 node_conditions.go:105] duration metric: took 174.653119ms to run NodePressure ...
	I0829 18:28:52.727628   31894 start.go:241] waiting for startup goroutines ...
	I0829 18:28:52.727654   31894 start.go:255] writing updated cluster config ...
	I0829 18:28:52.728027   31894 ssh_runner.go:195] Run: rm -f paused
	I0829 18:28:52.780476   31894 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:28:52.782625   31894 out.go:177] * Done! kubectl is now configured to use "ha-782425" cluster and "default" namespace by default
	
	
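The start log above ends with minikube polling each kube-system pod until it reports "Ready", probing the apiserver's /healthz endpoint at 192.168.39.39:8443, and checking node capacity before printing "Done!". For readers who want to repeat those checks against the same cluster outside the test harness, the following is a minimal sketch using client-go; the kubeconfig path is an assumption and not taken from this report:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; minikube writes the "ha-782425" context here by default.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same readiness sweep the log performs: list kube-system pods and print their phase.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}

	// Same health probe the log performs against the apiserver's /healthz endpoint.
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz: %s\n", body)
}
```

The CRI-O section that follows is the runtime-side view of the same cluster, captured by `minikube logs` for the failed test.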
	==> CRI-O <==
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.366876997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58eef011-efd8-49f6-a8eb-dd68e72b3bb7 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.367614031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0b5c0b2-445f-4cd9-a599-2d141cd396f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.368295830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956411368268836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0b5c0b2-445f-4cd9-a599-2d141cd396f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.369027747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9699a9d-6f14-4079-9f4a-9d430678c7cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.369144369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9699a9d-6f14-4079-9f4a-9d430678c7cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.369463006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9699a9d-6f14-4079-9f4a-9d430678c7cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.384752774Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=cdcb1956-83a8-477a-af42-23142ed67f77 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.385316288Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-vwgrt,Uid:0e10fff1-6582-4f04-a07b-bd664457f72d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724956133992406681,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:28:53.671216294Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f41ebca1-035e-44b0-96a2-3aa1e794bc1f,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1724955999290499011,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T18:26:38.966327714Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-qhxm5,Uid:286ec4e7-9401-4bdd-b8b2-86f00f130fc2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955999285240692,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:26:38.964016404Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-nw2x2,Uid:ab54ce43-4bd7-43ff-aad9-5cac2beb035b,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1724955999262511000,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:26:38.956750573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&PodSandboxMetadata{Name:kube-proxy-d5kbx,Uid:9033b7fd-0da5-4558-8c52-0ba06a7a4704,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955983791950452,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-29T18:26:23.473815262Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&PodSandboxMetadata{Name:kindnet-7l5kn,Uid:1a9ac71b-acaf-4ac9-b330-943525137d23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955983786040529,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:26:23.476929070Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-782425,Uid:785b0945a31435ed85f818ddb1964463,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1724955972844137443,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{kubernetes.io/config.hash: 785b0945a31435ed85f818ddb1964463,kubernetes.io/config.seen: 2024-08-29T18:26:11.771613724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-782425,Uid:3c0cf445bd78f47e6e7fbbeb486ff4de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955972842376030,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apis
erver.advertise-address.endpoint: 192.168.39.39:8443,kubernetes.io/config.hash: 3c0cf445bd78f47e6e7fbbeb486ff4de,kubernetes.io/config.seen: 2024-08-29T18:26:11.771615896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-782425,Uid:551caf35234a7eb1c2260c492e064b1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955972837317476,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 551caf35234a7eb1c2260c492e064b1e,kubernetes.io/config.seen: 2024-08-29T18:26:11.771612254Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Meta
data:&PodSandboxMetadata{Name:kube-controller-manager-ha-782425,Uid:cfcf75b2b14a72ac0b886c83206e03cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955972833904794,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cfcf75b2b14a72ac0b886c83206e03cf,kubernetes.io/config.seen: 2024-08-29T18:26:11.771607855Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&PodSandboxMetadata{Name:etcd-ha-782425,Uid:4edf4f911b63406e25f415895b8739c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724955972826695088,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-782425,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.39:2379,kubernetes.io/config.hash: 4edf4f911b63406e25f415895b8739c1,kubernetes.io/config.seen: 2024-08-29T18:26:11.771614859Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cdcb1956-83a8-477a-af42-23142ed67f77 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.386095033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edd9e913-4aea-463e-997a-217fbf9de8e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.386162952Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edd9e913-4aea-463e-997a-217fbf9de8e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.386405148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edd9e913-4aea-463e-997a-217fbf9de8e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.412910317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff2f7390-f21b-4019-8aec-2a841ba421cb name=/runtime.v1.RuntimeService/Version
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.413021658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff2f7390-f21b-4019-8aec-2a841ba421cb name=/runtime.v1.RuntimeService/Version
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.414008637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a7ab8f8-5e5a-4c4c-8a38-f3aef4860a02 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.414566372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956411414543577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a7ab8f8-5e5a-4c4c-8a38-f3aef4860a02 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.415204693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6fbdaa2-79d6-4110-aa24-db480d0f073c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.415273689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6fbdaa2-79d6-4110-aa24-db480d0f073c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.415497271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6fbdaa2-79d6-4110-aa24-db480d0f073c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.450619471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af1d834d-29e9-43f0-8cb8-9b1c0177bafa name=/runtime.v1.RuntimeService/Version
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.450708199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af1d834d-29e9-43f0-8cb8-9b1c0177bafa name=/runtime.v1.RuntimeService/Version
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.451885245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7120a685-6298-484c-933b-b30a2f8f787f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.452373036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956411452351532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7120a685-6298-484c-933b-b30a2f8f787f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.452837951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfc3458d-c507-46d2-b4ce-8ac70d24cc01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.452899703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfc3458d-c507-46d2-b4ce-8ac70d24cc01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:33:31 ha-782425 crio[671]: time="2024-08-29 18:33:31.453299338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956137320622555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84662d6e106199b21ed477f5a2886b295b043a6867485c365cfc10d478200160,PodSandboxId:8293780e1d6d4a1909809f02340a4b9cc62e32d7001d150d0addf9aeb78c49b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724955999524619318,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999481874147,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724955999444745885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4b
d7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724955987639286993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495598
4165837632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4,PodSandboxId:6f11ab2a6fb7e7955643f60135f84a5af263d5fec7402aa76eb4fc4addc1adea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172495597819
6173331,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785b0945a31435ed85f818ddb1964463,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724955973080680954,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724955973067436880,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434,PodSandboxId:633bf8a10344688b7780c2e84db6460da5bd182ad67296e33ac7186ef9c44dd9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724955973042670585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292,PodSandboxId:65d7a502881aee9e7eacf72e23843e0933e076edcb70634e71f902447d1d986b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724955972991457099,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfc3458d-c507-46d2-b4ce-8ac70d24cc01 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37662e4a563b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   3fd1be2d5c605       busybox-7dff88458-vwgrt
	84662d6e10619       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   8293780e1d6d4       storage-provisioner
	409d0bb5b6b40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   21f825f2fab4d       coredns-6f6b679f8f-qhxm5
	4bd32029a6efc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   a3d59948e98ac       coredns-6f6b679f8f-nw2x2
	23aa351e7d2aa       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   a4dea5e1c4a59       kindnet-7l5kn
	2b337a7249ae2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   b589b425f1e05       kube-proxy-d5kbx
	216684e155595       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   6f11ab2a6fb7e       kube-vip-ha-782425
	5077da1dd8cc1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   6bd7384dc0e18       etcd-ha-782425
	a97655078532a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   8f3aec69eb919       kube-scheduler-ha-782425
	24877a3e0c79c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   633bf8a103446       kube-controller-manager-ha-782425
	33ef8a4b863ba       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   65d7a502881ae       kube-apiserver-ha-782425
	
	
	==> coredns [409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902] <==
	[INFO] 10.244.2.2:57473 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200833s
	[INFO] 10.244.2.2:52567 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010875539s
	[INFO] 10.244.2.2:49428 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000147198s
	[INFO] 10.244.1.2:41836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001828177s
	[INFO] 10.244.1.2:37840 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088834s
	[INFO] 10.244.1.2:58950 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398175s
	[INFO] 10.244.1.2:44242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081199s
	[INFO] 10.244.1.2:34411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000240374s
	[INFO] 10.244.0.4:53126 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090758s
	[INFO] 10.244.0.4:52901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119888s
	[INFO] 10.244.0.4:37257 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017821s
	[INFO] 10.244.0.4:52278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240335s
	[INFO] 10.244.2.2:51997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116371s
	[INFO] 10.244.2.2:50462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000182689s
	[INFO] 10.244.1.2:35790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065854s
	[INFO] 10.244.0.4:56280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165741s
	[INFO] 10.244.2.2:45436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113865s
	[INFO] 10.244.2.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000419163s
	[INFO] 10.244.2.2:49859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112498s
	[INFO] 10.244.1.2:38106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212429s
	[INFO] 10.244.1.2:54743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163094s
	[INFO] 10.244.1.2:54398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014924s
	[INFO] 10.244.1.2:38833 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103377s
	[INFO] 10.244.0.4:55589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206346s
	[INFO] 10.244.0.4:55224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098455s
	
	
	==> coredns [4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c] <==
	[INFO] 10.244.0.4:43640 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001476481s
	[INFO] 10.244.0.4:39791 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000051362s
	[INFO] 10.244.0.4:57306 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001427687s
	[INFO] 10.244.2.2:37045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125236s
	[INFO] 10.244.2.2:51775 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196255s
	[INFO] 10.244.2.2:37371 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123702s
	[INFO] 10.244.2.2:59027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137207s
	[INFO] 10.244.1.2:42349 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121881s
	[INFO] 10.244.1.2:55845 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	[INFO] 10.244.1.2:50054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077465s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001939796s
	[INFO] 10.244.0.4:39167 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001349918s
	[INFO] 10.244.0.4:55247 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192001s
	[INFO] 10.244.0.4:50279 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056293s
	[INFO] 10.244.2.2:57566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010586s
	[INFO] 10.244.2.2:59408 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079146s
	[INFO] 10.244.1.2:58697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125072s
	[INFO] 10.244.1.2:39849 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011783s
	[INFO] 10.244.1.2:34464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086829s
	[INFO] 10.244.0.4:40575 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123993s
	[INFO] 10.244.0.4:53854 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077061s
	[INFO] 10.244.0.4:35333 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069139s
	[INFO] 10.244.2.2:47493 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133201s
	[INFO] 10.244.0.4:46944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105838s
	[INFO] 10.244.0.4:56535 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148137s
	
	
	==> describe nodes <==
	Name:               ha-782425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_26_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:33:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:29:25 +0000   Thu, 29 Aug 2024 18:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-782425
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44ba55866afc4f4897f7d5cbfc46f2df
	  System UUID:                44ba5586-6afc-4f48-97f7-d5cbfc46f2df
	  Boot ID:                    e2df80f3-fc71-40f7-9f6a-86fc01e04fd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwgrt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-6f6b679f8f-nw2x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m8s
	  kube-system                 coredns-6f6b679f8f-qhxm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m8s
	  kube-system                 etcd-ha-782425                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-7l5kn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m8s
	  kube-system                 kube-apiserver-ha-782425             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-ha-782425    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-d5kbx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-ha-782425             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-vip-ha-782425                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m7s   kube-proxy       
	  Normal  Starting                 7m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s  kubelet          Node ha-782425 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s  kubelet          Node ha-782425 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s  kubelet          Node ha-782425 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m9s   node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal  NodeReady                6m53s  kubelet          Node ha-782425 status is now: NodeReady
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal  RegisteredNode           4m56s  node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	
	
	Name:               ha-782425-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_27_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:27:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:30:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 18:29:15 +0000   Thu, 29 Aug 2024 18:30:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-782425-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a438bc2a769444e18345ad0f28ed5c33
	  System UUID:                a438bc2a-7694-44e1-8345-ad0f28ed5c33
	  Boot ID:                    75f0bd0d-e15b-47c8-9ca6-c5bb7d2e1afc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rsqqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-782425-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-kw2zk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-782425-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-782425-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-5k8xr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-782425-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-782425-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-782425-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-782425-m02 status is now: NodeNotReady
	
	
	Name:               ha-782425-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_28_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:28:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:33:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:29:27 +0000   Thu, 29 Aug 2024 18:28:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-782425-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d557f0e4bd084f8d98554b9e0d482ef3
	  System UUID:                d557f0e4-bd08-4f8d-9855-4b9e0d482ef3
	  Boot ID:                    0b5a7eeb-45ed-43be-92d9-4127e0390a70
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8k94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-782425-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-m5jqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-782425-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-ha-782425-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-vzss9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-782425-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-782425-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-782425-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal  RegisteredNode           5m                   node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal  RegisteredNode           4m57s                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	
	
	Name:               ha-782425-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_29_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:29:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:33:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:30:01 +0000   Thu, 29 Aug 2024 18:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-782425-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1d73c2cadaf4d3cb7d9a4d8e585f4dc
	  System UUID:                a1d73c2c-adaf-4d3c-b7d9-a4d8e585f4dc
	  Boot ID:                    91ce67f2-8b0c-469f-94e5-0736e893ec4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lbjt6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-proxy-5xgbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x2 over 4m1s)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x2 over 4m1s)  kubelet          Node ha-782425-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x2 over 4m1s)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal  NodeReady                3m40s                kubelet          Node ha-782425-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug29 18:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050223] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037711] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.717447] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.881000] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.439989] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug29 18:26] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.056184] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054002] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.164673] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.149154] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.266975] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.780708] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.381995] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.060319] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240176] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.218514] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +2.447866] kauditd_printk_skb: 26 callbacks suppressed
	[ +15.454195] kauditd_printk_skb: 38 callbacks suppressed
	[Aug29 18:27] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240] <==
	{"level":"warn","ts":"2024-08-29T18:33:31.708983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.718004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.723529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.726225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.734051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.740854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.746384Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.754570Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.759988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.760932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.767280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.770620Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.774019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.779949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.781929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.787692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.793580Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.797524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.800842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.804607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.805986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.806726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.812292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.822835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-29T18:33:31.823562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38979a8318efbb8d","from":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:33:31 up 7 min,  0 users,  load average: 1.57, 0.74, 0.32
	Linux ha-782425 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c] <==
	I0829 18:32:58.604279       1 main.go:299] handling current node
	I0829 18:33:08.601142       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:33:08.601192       1 main.go:299] handling current node
	I0829 18:33:08.601211       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:33:08.601219       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:33:08.601404       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:33:08.601426       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:33:08.601498       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:33:08.601516       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:33:18.602870       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:33:18.602978       1 main.go:299] handling current node
	I0829 18:33:18.603007       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:33:18.603025       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:33:18.603163       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:33:18.603184       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:33:18.603244       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:33:18.603269       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:33:28.595441       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:33:28.595568       1 main.go:299] handling current node
	I0829 18:33:28.595614       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:33:28.595635       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:33:28.595851       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:33:28.595884       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:33:28.595961       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:33:28.595979       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292] <==
	W0829 18:26:17.714072       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.39]
	I0829 18:26:17.716025       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 18:26:17.743603       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 18:26:17.747325       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 18:26:21.823934       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 18:26:21.838572       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 18:26:21.851313       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 18:26:22.941214       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 18:26:23.439375       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 18:28:58.926967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55292: use of closed network connection
	E0829 18:28:59.113215       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55320: use of closed network connection
	E0829 18:28:59.292611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55342: use of closed network connection
	E0829 18:28:59.473011       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55366: use of closed network connection
	E0829 18:28:59.661105       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55380: use of closed network connection
	E0829 18:28:59.845998       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0829 18:29:00.022186       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55412: use of closed network connection
	E0829 18:29:00.189414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55424: use of closed network connection
	E0829 18:29:00.362829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55438: use of closed network connection
	E0829 18:29:00.644380       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55458: use of closed network connection
	E0829 18:29:00.806979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55478: use of closed network connection
	E0829 18:29:00.983208       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55498: use of closed network connection
	E0829 18:29:01.155072       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55510: use of closed network connection
	E0829 18:29:01.339608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55534: use of closed network connection
	E0829 18:29:01.514915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55544: use of closed network connection
	W0829 18:30:27.718266       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.39]
	
	
	==> kube-controller-manager [24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434] <==
	I0829 18:29:30.977193       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-782425-m04" podCIDRs=["10.244.3.0/24"]
	I0829 18:29:30.977270       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:30.977336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:30.977650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:31.197436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:31.214845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:31.572319       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:32.654224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:32.655195       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-782425-m04"
	I0829 18:29:32.758004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:35.155187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:35.187216       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:41.068624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:51.785588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-782425-m04"
	I0829 18:29:51.786325       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:51.804165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:29:52.670397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:30:01.547137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:30:47.695975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:30:47.696371       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-782425-m04"
	I0829 18:30:47.731104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:30:47.835000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.621605ms"
	I0829 18:30:47.835107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.342µs"
	I0829 18:30:50.263077       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:30:52.961688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	
	
	==> kube-proxy [2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:26:24.489515       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:26:24.508455       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.39"]
	E0829 18:26:24.508982       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:26:24.569427       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:26:24.569483       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:26:24.569507       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:26:24.571810       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:26:24.572218       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:26:24.572452       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:26:24.574533       1 config.go:197] "Starting service config controller"
	I0829 18:26:24.574604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:26:24.574657       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:26:24.574676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:26:24.577339       1 config.go:326] "Starting node config controller"
	I0829 18:26:24.577371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:26:24.675657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:26:24.675685       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:26:24.677430       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7] <==
	E0829 18:26:16.986990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:26:16.999915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:26:16.999961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:26:17.315220       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:26:17.315325       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:26:20.267554       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 18:28:53.643358       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="6a403b21-4f43-4128-a1b9-b4d805e7d5b2" pod="default/busybox-7dff88458-rsqqv" assumedNode="ha-782425-m02" currentNode="ha-782425-m03"
	E0829 18:28:53.651947       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rsqqv\": pod busybox-7dff88458-rsqqv is already assigned to node \"ha-782425-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rsqqv" node="ha-782425-m03"
	E0829 18:28:53.652365       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6a403b21-4f43-4128-a1b9-b4d805e7d5b2(default/busybox-7dff88458-rsqqv) was assumed on ha-782425-m03 but assigned to ha-782425-m02" pod="default/busybox-7dff88458-rsqqv"
	E0829 18:28:53.652538       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rsqqv\": pod busybox-7dff88458-rsqqv is already assigned to node \"ha-782425-m02\"" pod="default/busybox-7dff88458-rsqqv"
	I0829 18:28:53.652740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rsqqv" node="ha-782425-m02"
	E0829 18:28:53.677627       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-h8k94" node="ha-782425-m03"
	E0829 18:28:53.677952       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" pod="default/busybox-7dff88458-h8k94"
	E0829 18:28:53.695276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:28:53.695376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e10fff1-6582-4f04-a07b-bd664457f72d(default/busybox-7dff88458-vwgrt) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vwgrt"
	E0829 18:28:53.695398       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" pod="default/busybox-7dff88458-vwgrt"
	I0829 18:28:53.695418       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:29:31.044983       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045106       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ee67d98e-b169-415c-ac85-e253e2888144(kube-system/kindnet-lbjt6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lbjt6"
	E0829 18:29:31.045132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" pod="kube-system/kindnet-lbjt6"
	I0829 18:29:31.045177       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	E0829 18:29:31.045987       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 278c58ce-3b1f-45c5-a1c9-0d2ce710f092(kube-system/kube-proxy-5xgbn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5xgbn"
	E0829 18:29:31.046008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" pod="kube-system/kube-proxy-5xgbn"
	I0829 18:29:31.046027       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	
	
	==> kubelet <==
	Aug 29 18:32:21 ha-782425 kubelet[1321]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:32:21 ha-782425 kubelet[1321]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:32:21 ha-782425 kubelet[1321]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:32:21 ha-782425 kubelet[1321]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:32:21 ha-782425 kubelet[1321]: E0829 18:32:21.873973    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956341873360262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:21 ha-782425 kubelet[1321]: E0829 18:32:21.874024    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956341873360262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:31 ha-782425 kubelet[1321]: E0829 18:32:31.876827    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956351875981359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:31 ha-782425 kubelet[1321]: E0829 18:32:31.876867    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956351875981359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:41 ha-782425 kubelet[1321]: E0829 18:32:41.878346    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956361878049786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:41 ha-782425 kubelet[1321]: E0829 18:32:41.878380    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956361878049786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:51 ha-782425 kubelet[1321]: E0829 18:32:51.879912    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956371879450803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:32:51 ha-782425 kubelet[1321]: E0829 18:32:51.879940    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956371879450803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:01 ha-782425 kubelet[1321]: E0829 18:33:01.882451    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956381880900840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:01 ha-782425 kubelet[1321]: E0829 18:33:01.882907    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956381880900840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:11 ha-782425 kubelet[1321]: E0829 18:33:11.884541    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956391884194768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:11 ha-782425 kubelet[1321]: E0829 18:33:11.884570    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956391884194768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:21 ha-782425 kubelet[1321]: E0829 18:33:21.760614    1321 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 18:33:21 ha-782425 kubelet[1321]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:33:21 ha-782425 kubelet[1321]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:33:21 ha-782425 kubelet[1321]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:33:21 ha-782425 kubelet[1321]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:33:21 ha-782425 kubelet[1321]: E0829 18:33:21.888670    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956401887723512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:21 ha-782425 kubelet[1321]: E0829 18:33:21.888714    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956401887723512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:31 ha-782425 kubelet[1321]: E0829 18:33:31.891037    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956411890686253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:33:31 ha-782425 kubelet[1321]: E0829 18:33:31.891061    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956411890686253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-782425 -n ha-782425
helpers_test.go:261: (dbg) Run:  kubectl --context ha-782425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.85s)
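(Editor's note: the post-mortem above can be re-collected by hand against the same profile if it is still present. A minimal sketch, assuming the ha-782425 profile and its nodes still exist, using only the commands already shown in this report plus a manual `kubectl describe node`:

	# check the API server status reported by minikube for the primary node
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-782425 -n ha-782425
	# re-dump the description of the secondary control-plane node shown above
	kubectl --context ha-782425 describe node ha-782425-m03
	# list any pods that are not in the Running phase, across all namespaces
	kubectl --context ha-782425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
)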

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-782425 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-782425 -v=7 --alsologtostderr
E0829 18:34:49.633004   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:35:17.335406   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-782425 -v=7 --alsologtostderr: exit status 82 (2m1.819033898s)

                                                
                                                
-- stdout --
	* Stopping node "ha-782425-m04"  ...
	* Stopping node "ha-782425-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:33:33.272030   37655 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:33:33.272129   37655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:33.272137   37655 out.go:358] Setting ErrFile to fd 2...
	I0829 18:33:33.272141   37655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:33:33.272357   37655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:33:33.272571   37655 out.go:352] Setting JSON to false
	I0829 18:33:33.272655   37655 mustload.go:65] Loading cluster: ha-782425
	I0829 18:33:33.272992   37655 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:33:33.273083   37655 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:33:33.273254   37655 mustload.go:65] Loading cluster: ha-782425
	I0829 18:33:33.273379   37655 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:33:33.273401   37655 stop.go:39] StopHost: ha-782425-m04
	I0829 18:33:33.273776   37655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:33.273815   37655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:33.290551   37655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0829 18:33:33.291062   37655 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:33.291784   37655 main.go:141] libmachine: Using API Version  1
	I0829 18:33:33.291806   37655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:33.292168   37655 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:33.294026   37655 out.go:177] * Stopping node "ha-782425-m04"  ...
	I0829 18:33:33.295586   37655 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 18:33:33.295635   37655 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:33:33.295882   37655 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 18:33:33.295912   37655 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:33:33.299115   37655 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:33.299518   37655 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:29:15 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:33:33.299548   37655 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:33:33.299734   37655 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:33:33.299924   37655 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:33:33.300058   37655 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:33:33.300230   37655 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:33:33.387858   37655 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 18:33:33.440480   37655 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 18:33:33.493299   37655 main.go:141] libmachine: Stopping "ha-782425-m04"...
	I0829 18:33:33.493351   37655 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:33:33.494986   37655 main.go:141] libmachine: (ha-782425-m04) Calling .Stop
	I0829 18:33:33.498222   37655 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 0/120
	I0829 18:33:34.640008   37655 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:33:34.641605   37655 main.go:141] libmachine: Machine "ha-782425-m04" was stopped.
	I0829 18:33:34.641623   37655 stop.go:75] duration metric: took 1.346046025s to stop
	I0829 18:33:34.641654   37655 stop.go:39] StopHost: ha-782425-m03
	I0829 18:33:34.641988   37655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:33:34.642033   37655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:33:34.657408   37655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0829 18:33:34.657792   37655 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:33:34.658253   37655 main.go:141] libmachine: Using API Version  1
	I0829 18:33:34.658271   37655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:33:34.658594   37655 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:33:34.660672   37655 out.go:177] * Stopping node "ha-782425-m03"  ...
	I0829 18:33:34.661792   37655 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 18:33:34.661818   37655 main.go:141] libmachine: (ha-782425-m03) Calling .DriverName
	I0829 18:33:34.662053   37655 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 18:33:34.662073   37655 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHHostname
	I0829 18:33:34.664822   37655 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:34.665203   37655 main.go:141] libmachine: (ha-782425-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:78:f3", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:27:52 +0000 UTC Type:0 Mac:52:54:00:b5:78:f3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-782425-m03 Clientid:01:52:54:00:b5:78:f3}
	I0829 18:33:34.665227   37655 main.go:141] libmachine: (ha-782425-m03) DBG | domain ha-782425-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:b5:78:f3 in network mk-ha-782425
	I0829 18:33:34.665355   37655 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHPort
	I0829 18:33:34.665526   37655 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHKeyPath
	I0829 18:33:34.665666   37655 main.go:141] libmachine: (ha-782425-m03) Calling .GetSSHUsername
	I0829 18:33:34.665791   37655 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m03/id_rsa Username:docker}
	I0829 18:33:34.744776   37655 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 18:33:34.797157   37655 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 18:33:34.850394   37655 main.go:141] libmachine: Stopping "ha-782425-m03"...
	I0829 18:33:34.850435   37655 main.go:141] libmachine: (ha-782425-m03) Calling .GetState
	I0829 18:33:34.851930   37655 main.go:141] libmachine: (ha-782425-m03) Calling .Stop
	I0829 18:33:34.855237   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 0/120
	I0829 18:33:35.856903   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 1/120
	I0829 18:33:36.858313   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 2/120
	I0829 18:33:37.859907   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 3/120
	I0829 18:33:38.861475   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 4/120
	I0829 18:33:39.863556   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 5/120
	I0829 18:33:40.865003   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 6/120
	I0829 18:33:41.866477   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 7/120
	I0829 18:33:42.868065   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 8/120
	I0829 18:33:43.869594   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 9/120
	I0829 18:33:44.871485   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 10/120
	I0829 18:33:45.873059   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 11/120
	I0829 18:33:46.874758   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 12/120
	I0829 18:33:47.876368   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 13/120
	I0829 18:33:48.877934   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 14/120
	I0829 18:33:49.880106   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 15/120
	I0829 18:33:50.881842   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 16/120
	I0829 18:33:51.883451   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 17/120
	I0829 18:33:52.885052   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 18/120
	I0829 18:33:53.886333   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 19/120
	I0829 18:33:54.888201   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 20/120
	I0829 18:33:55.889703   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 21/120
	I0829 18:33:56.891076   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 22/120
	I0829 18:33:57.892773   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 23/120
	I0829 18:33:58.894611   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 24/120
	I0829 18:33:59.896351   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 25/120
	I0829 18:34:00.897896   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 26/120
	I0829 18:34:01.899645   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 27/120
	I0829 18:34:02.901179   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 28/120
	I0829 18:34:03.902442   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 29/120
	I0829 18:34:04.904504   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 30/120
	I0829 18:34:05.905842   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 31/120
	I0829 18:34:06.907685   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 32/120
	I0829 18:34:07.909404   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 33/120
	I0829 18:34:08.910766   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 34/120
	I0829 18:34:09.912578   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 35/120
	I0829 18:34:10.914240   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 36/120
	I0829 18:34:11.915832   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 37/120
	I0829 18:34:12.917166   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 38/120
	I0829 18:34:13.918677   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 39/120
	I0829 18:34:14.920373   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 40/120
	I0829 18:34:15.921679   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 41/120
	I0829 18:34:16.923039   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 42/120
	I0829 18:34:17.924291   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 43/120
	I0829 18:34:18.925492   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 44/120
	I0829 18:34:19.926587   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 45/120
	I0829 18:34:20.927961   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 46/120
	I0829 18:34:21.929412   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 47/120
	I0829 18:34:22.931092   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 48/120
	I0829 18:34:23.932419   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 49/120
	I0829 18:34:24.933933   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 50/120
	I0829 18:34:25.935128   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 51/120
	I0829 18:34:26.936800   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 52/120
	I0829 18:34:27.938346   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 53/120
	I0829 18:34:28.940466   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 54/120
	I0829 18:34:29.942358   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 55/120
	I0829 18:34:30.944692   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 56/120
	I0829 18:34:31.946292   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 57/120
	I0829 18:34:32.947900   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 58/120
	I0829 18:34:33.949368   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 59/120
	I0829 18:34:34.950990   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 60/120
	I0829 18:34:35.952445   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 61/120
	I0829 18:34:36.953977   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 62/120
	I0829 18:34:37.955480   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 63/120
	I0829 18:34:38.957036   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 64/120
	I0829 18:34:39.958866   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 65/120
	I0829 18:34:40.960131   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 66/120
	I0829 18:34:41.961510   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 67/120
	I0829 18:34:42.963287   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 68/120
	I0829 18:34:43.965065   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 69/120
	I0829 18:34:44.967498   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 70/120
	I0829 18:34:45.969061   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 71/120
	I0829 18:34:46.970376   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 72/120
	I0829 18:34:47.971956   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 73/120
	I0829 18:34:48.973455   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 74/120
	I0829 18:34:49.975280   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 75/120
	I0829 18:34:50.976847   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 76/120
	I0829 18:34:51.978548   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 77/120
	I0829 18:34:52.979797   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 78/120
	I0829 18:34:53.981318   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 79/120
	I0829 18:34:54.983208   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 80/120
	I0829 18:34:55.984705   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 81/120
	I0829 18:34:56.985942   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 82/120
	I0829 18:34:57.987217   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 83/120
	I0829 18:34:58.988559   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 84/120
	I0829 18:34:59.990453   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 85/120
	I0829 18:35:00.991703   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 86/120
	I0829 18:35:01.993140   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 87/120
	I0829 18:35:02.994450   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 88/120
	I0829 18:35:03.996026   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 89/120
	I0829 18:35:04.998162   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 90/120
	I0829 18:35:05.999348   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 91/120
	I0829 18:35:07.000742   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 92/120
	I0829 18:35:08.002173   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 93/120
	I0829 18:35:09.003472   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 94/120
	I0829 18:35:10.005016   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 95/120
	I0829 18:35:11.006350   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 96/120
	I0829 18:35:12.007513   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 97/120
	I0829 18:35:13.009130   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 98/120
	I0829 18:35:14.010556   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 99/120
	I0829 18:35:15.012293   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 100/120
	I0829 18:35:16.013741   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 101/120
	I0829 18:35:17.015569   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 102/120
	I0829 18:35:18.017422   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 103/120
	I0829 18:35:19.018812   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 104/120
	I0829 18:35:20.020863   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 105/120
	I0829 18:35:21.022161   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 106/120
	I0829 18:35:22.023475   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 107/120
	I0829 18:35:23.025787   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 108/120
	I0829 18:35:24.027010   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 109/120
	I0829 18:35:25.028413   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 110/120
	I0829 18:35:26.029816   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 111/120
	I0829 18:35:27.031136   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 112/120
	I0829 18:35:28.032558   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 113/120
	I0829 18:35:29.033899   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 114/120
	I0829 18:35:30.035230   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 115/120
	I0829 18:35:31.036555   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 116/120
	I0829 18:35:32.037985   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 117/120
	I0829 18:35:33.039364   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 118/120
	I0829 18:35:34.040822   37655 main.go:141] libmachine: (ha-782425-m03) Waiting for machine to stop 119/120
	I0829 18:35:35.041784   37655 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 18:35:35.041847   37655 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 18:35:35.043682   37655 out.go:201] 
	W0829 18:35:35.044989   37655 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 18:35:35.045006   37655 out.go:270] * 
	W0829 18:35:35.047307   37655 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 18:35:35.048573   37655 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-782425 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-782425 --wait=true -v=7 --alsologtostderr
E0829 18:38:26.706487   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-782425 --wait=true -v=7 --alsologtostderr: (4m0.603346052s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-782425
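The stop failure above follows a fixed polling pattern: the kvm2 driver checks the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and, with the domain still "Running" after the last attempt, stop.go surfaces the GUEST_STOP_TIMEOUT error that fails the test with exit status 82. The sketch below is a minimal Go illustration of that loop, not minikube's actual implementation; getState and stopVM are hypothetical stand-ins for the libmachine calls seen in the log, and the 1-second poll interval and 120-attempt limit are assumptions read off the timestamps and the "N/120" counter.

	// Minimal sketch of the stop-wait behaviour inferred from the log above.
	// getState is a hypothetical stand-in for the libmachine .GetState call.
	package main

	import (
		"fmt"
		"time"
	)

	// Assumption: in this run the VM never leaves the "Running" state.
	func getState() string { return "Running" }

	// stopVM polls the VM state once per second, up to maxAttempts times,
	// and returns an error if the machine is still running at the end.
	func stopVM(maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			if getState() == "Stopped" {
				return nil
			}
			time.Sleep(1 * time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", getState())
	}

	func main() {
		if err := stopVM(120); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM:", err)
		}
	}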
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-782425 -n ha-782425
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-782425 logs -n 25: (1.864990703s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m04 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp testdata/cp-test.txt                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425:/home/docker/cp-test_ha-782425-m04_ha-782425.txt                       |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425 sudo cat                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425.txt                                 |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03:/home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m03 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-782425 node stop m02 -v=7                                                     | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-782425 node start m02 -v=7                                                    | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-782425 -v=7                                                           | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-782425 -v=7                                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-782425 --wait=true -v=7                                                    | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:35 UTC | 29 Aug 24 18:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-782425                                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:39 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:35:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:35:35.094293   38130 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:35:35.094416   38130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:35:35.094428   38130 out.go:358] Setting ErrFile to fd 2...
	I0829 18:35:35.094435   38130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:35:35.094679   38130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:35:35.095349   38130 out.go:352] Setting JSON to false
	I0829 18:35:35.096524   38130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4682,"bootTime":1724951853,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:35:35.096588   38130 start.go:139] virtualization: kvm guest
	I0829 18:35:35.098697   38130 out.go:177] * [ha-782425] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:35:35.100174   38130 notify.go:220] Checking for updates...
	I0829 18:35:35.100249   38130 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:35:35.101742   38130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:35:35.103064   38130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:35:35.104323   38130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:35:35.105553   38130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:35:35.106702   38130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:35:35.108193   38130 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:35:35.108300   38130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:35:35.108913   38130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:35:35.108970   38130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:35:35.124238   38130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0829 18:35:35.124678   38130 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:35:35.125208   38130 main.go:141] libmachine: Using API Version  1
	I0829 18:35:35.125227   38130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:35:35.125527   38130 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:35:35.125694   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:35:35.160928   38130 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 18:35:35.162147   38130 start.go:297] selected driver: kvm2
	I0829 18:35:35.162163   38130 start.go:901] validating driver "kvm2" against &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:35:35.162338   38130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:35:35.162644   38130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:35:35.162721   38130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:35:35.177483   38130 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:35:35.178388   38130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:35:35.178475   38130 cni.go:84] Creating CNI manager for ""
	I0829 18:35:35.178491   38130 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 18:35:35.178555   38130 start.go:340] cluster config:
	{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:35:35.178724   38130 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:35:35.180660   38130 out.go:177] * Starting "ha-782425" primary control-plane node in "ha-782425" cluster
	I0829 18:35:35.181854   38130 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:35:35.181887   38130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:35:35.181894   38130 cache.go:56] Caching tarball of preloaded images
	I0829 18:35:35.181956   38130 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:35:35.181966   38130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:35:35.182074   38130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:35:35.182290   38130 start.go:360] acquireMachinesLock for ha-782425: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:35:35.182357   38130 start.go:364] duration metric: took 49.226µs to acquireMachinesLock for "ha-782425"
	I0829 18:35:35.182371   38130 start.go:96] Skipping create...Using existing machine configuration
	I0829 18:35:35.182376   38130 fix.go:54] fixHost starting: 
	I0829 18:35:35.182641   38130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:35:35.182670   38130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:35:35.197637   38130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38751
	I0829 18:35:35.198027   38130 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:35:35.198631   38130 main.go:141] libmachine: Using API Version  1
	I0829 18:35:35.198659   38130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:35:35.198997   38130 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:35:35.199234   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:35:35.199426   38130 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:35:35.200995   38130 fix.go:112] recreateIfNeeded on ha-782425: state=Running err=<nil>
	W0829 18:35:35.201014   38130 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 18:35:35.202798   38130 out.go:177] * Updating the running kvm2 "ha-782425" VM ...
	I0829 18:35:35.204027   38130 machine.go:93] provisionDockerMachine start ...
	I0829 18:35:35.204054   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:35:35.204238   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.206531   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.206918   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.206945   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.207060   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.207249   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.207392   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.207535   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.207740   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.207926   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.207936   38130 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:35:35.318798   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425
	
	I0829 18:35:35.318825   38130 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:35:35.319091   38130 buildroot.go:166] provisioning hostname "ha-782425"
	I0829 18:35:35.319114   38130 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:35:35.319296   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.321974   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.322391   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.322427   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.322522   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.322700   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.322867   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.323100   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.323286   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.323472   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.323493   38130 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425 && echo "ha-782425" | sudo tee /etc/hostname
	I0829 18:35:35.448806   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425
	
	I0829 18:35:35.448837   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.451650   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.452049   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.452075   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.452253   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.452447   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.452609   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.452727   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.452881   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.453080   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.453099   38130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:35:35.566817   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:35:35.566843   38130 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:35:35.566874   38130 buildroot.go:174] setting up certificates
	I0829 18:35:35.566886   38130 provision.go:84] configureAuth start
	I0829 18:35:35.566902   38130 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:35:35.567150   38130 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:35:35.569710   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.570061   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.570102   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.570266   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.572471   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.572825   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.572853   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.572961   38130 provision.go:143] copyHostCerts
	I0829 18:35:35.572990   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:35:35.573027   38130 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:35:35.573043   38130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:35:35.573104   38130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:35:35.573186   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:35:35.573204   38130 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:35:35.573208   38130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:35:35.573230   38130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:35:35.573281   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:35:35.573299   38130 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:35:35.573302   38130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:35:35.573322   38130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:35:35.573382   38130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425 san=[127.0.0.1 192.168.39.39 ha-782425 localhost minikube]
	I0829 18:35:35.660260   38130 provision.go:177] copyRemoteCerts
	I0829 18:35:35.660322   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:35:35.660343   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.662854   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.663213   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.663239   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.663424   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.663604   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.663746   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.663877   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:35:35.748557   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:35:35.748632   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:35:35.774522   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:35:35.774604   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0829 18:35:35.802420   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:35:35.802488   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:35:35.827873   38130 provision.go:87] duration metric: took 260.972399ms to configureAuth
	I0829 18:35:35.827898   38130 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:35:35.828112   38130 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:35:35.828174   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.830937   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.831288   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.831326   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.831524   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.831721   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.831864   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.832001   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.832152   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.832321   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.832354   38130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:37:06.632618   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:37:06.632645   38130 machine.go:96] duration metric: took 1m31.428598655s to provisionDockerMachine
	I0829 18:37:06.632658   38130 start.go:293] postStartSetup for "ha-782425" (driver="kvm2")
	I0829 18:37:06.632670   38130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:37:06.632685   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.632999   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:37:06.633028   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.636076   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.636641   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.636663   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.636819   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.637070   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.637222   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.637387   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:37:06.724847   38130 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:37:06.728820   38130 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:37:06.728845   38130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:37:06.728907   38130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:37:06.729018   38130 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:37:06.729032   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:37:06.729144   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:37:06.739337   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:37:06.762335   38130 start.go:296] duration metric: took 129.660855ms for postStartSetup
	I0829 18:37:06.762380   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.762707   38130 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0829 18:37:06.762732   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.765548   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.765926   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.765951   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.766157   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.766350   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.766509   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.766664   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	W0829 18:37:06.847860   38130 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0829 18:37:06.847893   38130 fix.go:56] duration metric: took 1m31.665516475s for fixHost
	I0829 18:37:06.847919   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.850431   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.850823   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.850849   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.850959   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.851137   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.851248   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.851400   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.851568   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:37:06.851787   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:37:06.851801   38130 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:37:06.962643   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724956626.917922794
	
	I0829 18:37:06.962669   38130 fix.go:216] guest clock: 1724956626.917922794
	I0829 18:37:06.962681   38130 fix.go:229] Guest: 2024-08-29 18:37:06.917922794 +0000 UTC Remote: 2024-08-29 18:37:06.847901124 +0000 UTC m=+91.789559535 (delta=70.02167ms)
	I0829 18:37:06.962708   38130 fix.go:200] guest clock delta is within tolerance: 70.02167ms
	I0829 18:37:06.962718   38130 start.go:83] releasing machines lock for "ha-782425", held for 1m31.780350669s
	I0829 18:37:06.962748   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.963013   38130 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:37:06.965215   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.965584   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.965610   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.965803   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.966366   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.966537   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.966630   38130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:37:06.966674   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.966734   38130 ssh_runner.go:195] Run: cat /version.json
	I0829 18:37:06.966756   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.969172   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969204   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969538   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.969561   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969600   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.969620   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969675   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.969859   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.969861   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.970044   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.970046   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.970230   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.970245   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:37:06.970353   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:37:07.109014   38130 ssh_runner.go:195] Run: systemctl --version
	I0829 18:37:07.115576   38130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:37:07.274740   38130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:37:07.283660   38130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:37:07.283729   38130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:37:07.293055   38130 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 18:37:07.293079   38130 start.go:495] detecting cgroup driver to use...
	I0829 18:37:07.293137   38130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:37:07.309980   38130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:37:07.324647   38130 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:37:07.324737   38130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:37:07.338703   38130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:37:07.354049   38130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:37:07.504773   38130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:37:07.652012   38130 docker.go:233] disabling docker service ...
	I0829 18:37:07.652076   38130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:37:07.668406   38130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:37:07.681988   38130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:37:07.827358   38130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:37:07.970168   38130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:37:07.984429   38130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:37:08.003178   38130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:37:08.003247   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.014177   38130 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:37:08.014238   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.024932   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.036897   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.047166   38130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:37:08.057641   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.068105   38130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.081031   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.091246   38130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:37:08.100430   38130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:37:08.109910   38130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:37:08.255675   38130 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:37:12.170058   38130 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.914338259s)
	I0829 18:37:12.170099   38130 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:37:12.170149   38130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:37:12.174849   38130 start.go:563] Will wait 60s for crictl version
	I0829 18:37:12.174892   38130 ssh_runner.go:195] Run: which crictl
	I0829 18:37:12.178204   38130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:37:12.214459   38130 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:37:12.214540   38130 ssh_runner.go:195] Run: crio --version
	I0829 18:37:12.242960   38130 ssh_runner.go:195] Run: crio --version
	I0829 18:37:12.271960   38130 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:37:12.273405   38130 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:37:12.275817   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:12.276142   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:12.276166   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:12.276386   38130 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:37:12.280745   38130 kubeadm.go:883] updating cluster {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:37:12.280942   38130 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:37:12.281003   38130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:37:12.321741   38130 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:37:12.321760   38130 crio.go:433] Images already preloaded, skipping extraction
	I0829 18:37:12.321800   38130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:37:12.357183   38130 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:37:12.357200   38130 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:37:12.357208   38130 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.31.0 crio true true} ...
	I0829 18:37:12.357335   38130 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:37:12.357432   38130 ssh_runner.go:195] Run: crio config
	I0829 18:37:12.402538   38130 cni.go:84] Creating CNI manager for ""
	I0829 18:37:12.402564   38130 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 18:37:12.402595   38130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:37:12.402627   38130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-782425 NodeName:ha-782425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:37:12.402779   38130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-782425"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:37:12.402795   38130 kube-vip.go:115] generating kube-vip config ...
	I0829 18:37:12.402834   38130 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:37:12.414324   38130 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:37:12.414474   38130 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0829 18:37:12.414543   38130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:37:12.423866   38130 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:37:12.423938   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 18:37:12.433054   38130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 18:37:12.450034   38130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:37:12.466019   38130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 18:37:12.481895   38130 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 18:37:12.500904   38130 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:37:12.504744   38130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:37:12.647294   38130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:37:12.661336   38130 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.39
	I0829 18:37:12.661359   38130 certs.go:194] generating shared ca certs ...
	I0829 18:37:12.661378   38130 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:37:12.661537   38130 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:37:12.661592   38130 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:37:12.661606   38130 certs.go:256] generating profile certs ...
	I0829 18:37:12.661702   38130 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:37:12.661736   38130 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721
	I0829 18:37:12.661763   38130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.253 192.168.39.220 192.168.39.254]
	I0829 18:37:12.721553   38130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721 ...
	I0829 18:37:12.721584   38130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721: {Name:mkae0fb68c3921a8e6389bf55233edae9c484b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:37:12.721767   38130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721 ...
	I0829 18:37:12.721783   38130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721: {Name:mkfc0e4e7d4b044277a1f2550ca717ba5e4c6653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:37:12.721874   38130 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:37:12.722047   38130 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
	I0829 18:37:12.722216   38130 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:37:12.722235   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:37:12.722253   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:37:12.722273   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:37:12.722292   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:37:12.722311   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:37:12.722336   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:37:12.722360   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:37:12.722378   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:37:12.722445   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:37:12.722495   38130 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:37:12.722510   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:37:12.722542   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:37:12.722577   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:37:12.722624   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:37:12.722692   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:37:12.722735   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:37:12.722760   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:12.722778   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:37:12.723322   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:37:12.747935   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:37:12.770946   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:37:12.793697   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:37:12.815754   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 18:37:12.837298   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:37:12.859152   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:37:12.882330   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:37:12.904265   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:37:12.926027   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:37:12.949376   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:37:12.971128   38130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:37:12.987090   38130 ssh_runner.go:195] Run: openssl version
	I0829 18:37:12.992736   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:37:13.003053   38130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:37:13.007151   38130 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:37:13.007198   38130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:37:13.012497   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:37:13.021466   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:37:13.031542   38130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:13.035694   38130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:13.035771   38130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:13.041212   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:37:13.050667   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:37:13.061349   38130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:37:13.065275   38130 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:37:13.065333   38130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:37:13.070551   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:37:13.096933   38130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:37:13.114208   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 18:37:13.124544   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 18:37:13.131351   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 18:37:13.137142   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 18:37:13.145443   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 18:37:13.158345   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 18:37:13.183192   38130 kubeadm.go:392] StartCluster: {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:37:13.183304   38130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:37:13.183396   38130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:37:13.385010   38130 cri.go:89] found id: "450cc9d333192a050ee909372d05ad41a7242c093e83aafcf4e11dc2de735d10"
	I0829 18:37:13.385040   38130 cri.go:89] found id: "767087c78fa49bd5c1e4737317c00b8963261061039db2412620080ab784d984"
	I0829 18:37:13.385046   38130 cri.go:89] found id: "d6702bcf56ba304efd93a1f2eaac34664bb61926ecb61581099b71b28ed8cc90"
	I0829 18:37:13.385050   38130 cri.go:89] found id: "519a79c3fb1fe04e97738d1eb203c5fd726d83556a4664704ac9fd4f716b0811"
	I0829 18:37:13.385054   38130 cri.go:89] found id: "409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902"
	I0829 18:37:13.385059   38130 cri.go:89] found id: "4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c"
	I0829 18:37:13.385062   38130 cri.go:89] found id: "23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c"
	I0829 18:37:13.385065   38130 cri.go:89] found id: "2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d"
	I0829 18:37:13.385067   38130 cri.go:89] found id: "216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4"
	I0829 18:37:13.385072   38130 cri.go:89] found id: "5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240"
	I0829 18:37:13.385075   38130 cri.go:89] found id: "a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7"
	I0829 18:37:13.385091   38130 cri.go:89] found id: "24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434"
	I0829 18:37:13.385095   38130 cri.go:89] found id: "33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292"
	I0829 18:37:13.385101   38130 cri.go:89] found id: ""
	I0829 18:37:13.385149   38130 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.359622470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956776359590599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aaa2474-2214-4290-87ab-d538d5431c66 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.360433030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06bb1248-8e13-4209-9c5f-d48be841d15c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.360499577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06bb1248-8e13-4209-9c5f-d48be841d15c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.361020305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61da642e1b6a90457768c8a2a29f25d6d784c179cce01ab22de265bc05135898,PodSandboxId:a25f91827c1298c776d43d452fbcc51e09c3c6e8437d813e27cfb8fcf0074ce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956677815596945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724956676755763508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724956673762499415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99640f096bde6522770020b2249c41e85682111e94d36b9cd851593863a8ef29,PodSandboxId:60e047cd78823a50ad609bf0e147de8e256b54ee232893b23e18a36b5601fe9e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724956645211094274,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b999853b85936b403e953d43f9f09979,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65,PodSandboxId:1b441512c4428695d913c9f4a6d0e4801ed79c1cf1f2d727c91fc539ed988656,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724956644147256399,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a,PodSandboxId:312c52b155c2894017c4d59c923caa4eec4f963377bd5795dcddaacfe652acb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956644036103274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6,PodSandboxId:98c171db62011a31f1e5b96fdd7ac555f5fc1756d6f17abf00c7bdb02bbf77a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724956643955850912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab,PodSandboxId:91f0d779161a7bd935d92644834529cce86bfbdcf46737c08dd1c7d3bfa4016e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956640906958569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a,PodSandboxId:b3d37d9275f30d0fd89144cac3db73cc309f7cc735e96b7e24d80d8f8889f814,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724956634268028784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724956633504160407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6,PodSandboxId:15ef2a08cc118654fb489a2c672412210856b02d25a475acc47023b724dee08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724956633453755653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf3523
4a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724956633400000614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de
,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724956633255170031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206
e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724956137320713034,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f
72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999481973684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999444863415,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724955987639328499,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724955984165847613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724955973084150044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724955973067497563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06bb1248-8e13-4209-9c5f-d48be841d15c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.517279207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdb235ae-fc14-4c7c-a919-2276b3a36429 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.517570811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdb235ae-fc14-4c7c-a919-2276b3a36429 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.519728375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cb4af30-d1eb-46a9-8b4e-df57474e85b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.520624570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956776520585465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cb4af30-d1eb-46a9-8b4e-df57474e85b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.521486848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b06cead6-9954-406e-9a3d-5916ef7179d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.521583835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b06cead6-9954-406e-9a3d-5916ef7179d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:39:36 ha-782425 crio[3720]: time="2024-08-29 18:39:36.523039974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61da642e1b6a90457768c8a2a29f25d6d784c179cce01ab22de265bc05135898,PodSandboxId:a25f91827c1298c776d43d452fbcc51e09c3c6e8437d813e27cfb8fcf0074ce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956677815596945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724956676755763508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724956673762499415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99640f096bde6522770020b2249c41e85682111e94d36b9cd851593863a8ef29,PodSandboxId:60e047cd78823a50ad609bf0e147de8e256b54ee232893b23e18a36b5601fe9e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724956645211094274,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b999853b85936b403e953d43f9f09979,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65,PodSandboxId:1b441512c4428695d913c9f4a6d0e4801ed79c1cf1f2d727c91fc539ed988656,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724956644147256399,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a,PodSandboxId:312c52b155c2894017c4d59c923caa4eec4f963377bd5795dcddaacfe652acb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956644036103274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"pro
tocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6,PodSandboxId:98c171db62011a31f1e5b96fdd7ac555f5fc1756d6f17abf00c7bdb02bbf77a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724956643955850912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab,PodSandboxId:91f0d779161a7bd935d92644834529cce86bfbdcf46737c08dd1c7d3bfa4016e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956640906958569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a,PodSandboxId:b3d37d9275f30d0fd89144cac3db73cc309f7cc735e96b7e24d80d8f8889f814,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724956634268028784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724956633504160407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6,PodSandboxId:15ef2a08cc118654fb489a2c672412210856b02d25a475acc47023b724dee08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724956633453755653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf3523
4a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724956633400000614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de
,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724956633255170031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206
e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724956137320713034,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f
72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999481973684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999444863415,Labels:map[string]string{io.kubernetes.containe
r.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724955987639328499,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724955984165847613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913f
c06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724955973084150044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f72
9c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724955973067497563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b06cead6-9954-406e-9a3d-5916ef7179d0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	61da642e1b6a9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   a25f91827c129       busybox-7dff88458-vwgrt
	09d16af179676       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            3                   26532fd1cb3c0       kube-apiserver-ha-782425
	e402ee13d7250       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   d49e72306e840       kube-controller-manager-ha-782425
	99640f096bde6       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   60e047cd78823       kube-vip-ha-782425
	078060aad9431       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   1b441512c4428       kindnet-7l5kn
	858c007f01133       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   312c52b155c28       coredns-6f6b679f8f-qhxm5
	32483de13691d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   98c171db62011       kube-proxy-d5kbx
	d90d03cc16636       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   91f0d779161a7       coredns-6f6b679f8f-nw2x2
	55f4da722ec6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   b3d37d9275f30       etcd-ha-782425
	31aff49f13b2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   e87602d948838       storage-provisioner
	3f5fd38c54d3b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   15ef2a08cc118       kube-scheduler-ha-782425
	8f72905aa04f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   26532fd1cb3c0       kube-apiserver-ha-782425
	edd6df1c8c5b3       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   d49e72306e840       kube-controller-manager-ha-782425
	37662e4a563b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   3fd1be2d5c605       busybox-7dff88458-vwgrt
	409d0bb5b6b40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   21f825f2fab4d       coredns-6f6b679f8f-qhxm5
	4bd32029a6efc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   a3d59948e98ac       coredns-6f6b679f8f-nw2x2
	23aa351e7d2aa       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   a4dea5e1c4a59       kindnet-7l5kn
	2b337a7249ae2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   b589b425f1e05       kube-proxy-d5kbx
	5077da1dd8cc1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   6bd7384dc0e18       etcd-ha-782425
	a97655078532a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   8f3aec69eb919       kube-scheduler-ha-782425
	
	
	==> coredns [409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902] <==
	[INFO] 10.244.1.2:58950 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398175s
	[INFO] 10.244.1.2:44242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081199s
	[INFO] 10.244.1.2:34411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000240374s
	[INFO] 10.244.0.4:53126 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090758s
	[INFO] 10.244.0.4:52901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119888s
	[INFO] 10.244.0.4:37257 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017821s
	[INFO] 10.244.0.4:52278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240335s
	[INFO] 10.244.2.2:51997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116371s
	[INFO] 10.244.2.2:50462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000182689s
	[INFO] 10.244.1.2:35790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065854s
	[INFO] 10.244.0.4:56280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165741s
	[INFO] 10.244.2.2:45436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113865s
	[INFO] 10.244.2.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000419163s
	[INFO] 10.244.2.2:49859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112498s
	[INFO] 10.244.1.2:38106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212429s
	[INFO] 10.244.1.2:54743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163094s
	[INFO] 10.244.1.2:54398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014924s
	[INFO] 10.244.1.2:38833 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103377s
	[INFO] 10.244.0.4:55589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206346s
	[INFO] 10.244.0.4:55224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098455s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c] <==
	[INFO] 10.244.2.2:37045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125236s
	[INFO] 10.244.2.2:51775 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196255s
	[INFO] 10.244.2.2:37371 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123702s
	[INFO] 10.244.2.2:59027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137207s
	[INFO] 10.244.1.2:42349 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121881s
	[INFO] 10.244.1.2:55845 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	[INFO] 10.244.1.2:50054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077465s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001939796s
	[INFO] 10.244.0.4:39167 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001349918s
	[INFO] 10.244.0.4:55247 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192001s
	[INFO] 10.244.0.4:50279 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056293s
	[INFO] 10.244.2.2:57566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010586s
	[INFO] 10.244.2.2:59408 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079146s
	[INFO] 10.244.1.2:58697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125072s
	[INFO] 10.244.1.2:39849 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011783s
	[INFO] 10.244.1.2:34464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086829s
	[INFO] 10.244.0.4:40575 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123993s
	[INFO] 10.244.0.4:53854 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077061s
	[INFO] 10.244.0.4:35333 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069139s
	[INFO] 10.244.2.2:47493 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133201s
	[INFO] 10.244.0.4:46944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105838s
	[INFO] 10.244.0.4:56535 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148137s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1817&timeout=8m30s&timeoutSeconds=510&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51086->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1309394999]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 18:37:24.235) (total time: 12085ms):
	Trace[1309394999]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51086->10.96.0.1:443: read: connection reset by peer 12084ms (18:37:36.320)
	Trace[1309394999]: [12.085537139s] [12.085537139s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51086->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47046->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1152282068]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 18:37:25.417) (total time: 10904ms):
	Trace[1152282068]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47046->10.96.0.1:443: read: connection reset by peer 10904ms (18:37:36.321)
	Trace[1152282068]: [10.904550935s] [10.904550935s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47046->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-782425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_26_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:39:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-782425
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44ba55866afc4f4897f7d5cbfc46f2df
	  System UUID:                44ba5586-6afc-4f48-97f7-d5cbfc46f2df
	  Boot ID:                    e2df80f3-fc71-40f7-9f6a-86fc01e04fd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwgrt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-nw2x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-qhxm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-782425                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7l5kn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-782425             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-782425    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-d5kbx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-782425             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-782425                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 89s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-782425 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-782425 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-782425 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-782425 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Warning  ContainerGCFailed        3m15s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m38s (x3 over 3m27s)  kubelet          Node ha-782425 status is now: NodeNotReady
	  Normal   RegisteredNode           98s                    node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   RegisteredNode           94s                    node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   RegisteredNode           37s                    node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	
	
	Name:               ha-782425-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_27_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:27:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:39:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-782425-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a438bc2a769444e18345ad0f28ed5c33
	  System UUID:                a438bc2a-7694-44e1-8345-ad0f28ed5c33
	  Boot ID:                    f25faddc-b228-438e-8537-3bf15302de5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rsqqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-782425-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-kw2zk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-782425-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-782425-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5k8xr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-782425-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-782425-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-782425-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-782425-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-782425-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  NodeNotReady             8m50s                node-controller  Node ha-782425-m02 status is now: NodeNotReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node ha-782425-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           95s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           38s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	
	
	Name:               ha-782425-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_28_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:28:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:39:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:39:12 +0000   Thu, 29 Aug 2024 18:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:39:12 +0000   Thu, 29 Aug 2024 18:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:39:12 +0000   Thu, 29 Aug 2024 18:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:39:12 +0000   Thu, 29 Aug 2024 18:38:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-782425-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d557f0e4bd084f8d98554b9e0d482ef3
	  System UUID:                d557f0e4-bd08-4f8d-9855-4b9e0d482ef3
	  Boot ID:                    3a989eae-73dd-4e23-8099-366896882511
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h8k94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-782425-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-m5jqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-782425-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-782425-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vzss9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-782425-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-782425-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-782425-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-782425-m03 status is now: NodeNotReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 56s                kubelet          Node ha-782425-m03 has been rebooted, boot id: 3a989eae-73dd-4e23-8099-366896882511
	  Normal   NodeHasSufficientMemory  56s (x2 over 56s)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x2 over 56s)  kubelet          Node ha-782425-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x2 over 56s)  kubelet          Node ha-782425-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                56s                kubelet          Node ha-782425-m03 status is now: NodeReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-782425-m03 event: Registered Node ha-782425-m03 in Controller
	
	
	Name:               ha-782425-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_29_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:29:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:39:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:39:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:39:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:39:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:39:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-782425-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1d73c2cadaf4d3cb7d9a4d8e585f4dc
	  System UUID:                a1d73c2c-adaf-4d3c-b7d9-a4d8e585f4dc
	  Boot ID:                    e6b4e447-7857-4236-85e1-a47f00bda6d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lbjt6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-5xgbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-782425-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   NodeReady                9m46s              kubelet          Node ha-782425-m04 status is now: NodeReady
	  Normal   RegisteredNode           99s                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-782425-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-782425-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-782425-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-782425-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-782425-m04 has been rebooted, boot id: e6b4e447-7857-4236-85e1-a47f00bda6d5
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-782425-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056184] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054002] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.164673] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.149154] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.266975] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.780708] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.381995] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.060319] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240176] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.218514] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +2.447866] kauditd_printk_skb: 26 callbacks suppressed
	[ +15.454195] kauditd_printk_skb: 38 callbacks suppressed
	[Aug29 18:27] kauditd_printk_skb: 24 callbacks suppressed
	[Aug29 18:34] kauditd_printk_skb: 1 callbacks suppressed
	[Aug29 18:37] systemd-fstab-generator[3643]: Ignoring "noauto" option for root device
	[  +0.157201] systemd-fstab-generator[3655]: Ignoring "noauto" option for root device
	[  +0.171951] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.141238] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.279432] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +4.392282] systemd-fstab-generator[3808]: Ignoring "noauto" option for root device
	[  +0.089317] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.574001] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.706987] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.418678] kauditd_printk_skb: 30 callbacks suppressed
	[Aug29 18:38] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240] <==
	2024/08/29 18:35:35 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-29T18:35:36.135918Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13514618170561217261,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-29T18:35:36.248254Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.39:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:35:36.248443Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.39:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T18:35:36.248587Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"38979a8318efbb8d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-29T18:35:36.248845Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.248897Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.248921Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.248956Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249071Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249083Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249089Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249097Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249129Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249164Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249210Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249237Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249260Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.252063Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"warn","ts":"2024-08-29T18:35:36.252064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.120043688s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-29T18:35:36.252203Z","caller":"traceutil/trace.go:171","msg":"trace[1221347184] range","detail":"{range_begin:; range_end:; }","duration":"9.120196754s","start":"2024-08-29T18:35:27.131997Z","end":"2024-08-29T18:35:36.252194Z","steps":["trace[1221347184] 'agreement among raft nodes before linearized reading'  (duration: 9.120041599s)"],"step_count":1}
	{"level":"error","ts":"2024-08-29T18:35:36.252247Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-29T18:35:36.252359Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2024-08-29T18:35:36.252388Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-782425","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.39:2380"],"advertise-client-urls":["https://192.168.39.39:2379"]}
	
	
	==> etcd [55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a] <==
	{"level":"warn","ts":"2024-08-29T18:38:37.727875Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:39.830526Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1ede913032f684f1","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:39.830873Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1ede913032f684f1","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:41.729571Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:41.729724Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:44.830728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1ede913032f684f1","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:44.830935Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1ede913032f684f1","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-29T18:38:45.219058Z","caller":"traceutil/trace.go:171","msg":"trace[1088395682] linearizableReadLoop","detail":"{readStateIndex:2714; appliedIndex:2714; }","duration":"134.033757ms","start":"2024-08-29T18:38:45.084939Z","end":"2024-08-29T18:38:45.218973Z","steps":["trace[1088395682] 'read index received'  (duration: 134.028723ms)","trace[1088395682] 'applied index is now lower than readState.Index'  (duration: 4.146µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:38:45.222663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.655761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-782425-m03\" ","response":"range_response_count:1 size:5950"}
	{"level":"info","ts":"2024-08-29T18:38:45.222857Z","caller":"traceutil/trace.go:171","msg":"trace[1638186443] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-782425-m03; range_end:; response_count:1; response_revision:2335; }","duration":"137.929109ms","start":"2024-08-29T18:38:45.084909Z","end":"2024-08-29T18:38:45.222838Z","steps":["trace[1638186443] 'agreement among raft nodes before linearized reading'  (duration: 134.335372ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:38:45.223117Z","caller":"traceutil/trace.go:171","msg":"trace[1244495817] transaction","detail":"{read_only:false; response_revision:2336; number_of_response:1; }","duration":"150.611047ms","start":"2024-08-29T18:38:45.072494Z","end":"2024-08-29T18:38:45.223105Z","steps":["trace[1244495817] 'process raft request'  (duration: 146.056132ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:38:45.731003Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:45.731051Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:49.733052Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:49.733114Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1ede913032f684f1","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:49.831097Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1ede913032f684f1","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-29T18:38:49.831172Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1ede913032f684f1","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-29T18:38:53.604312Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.604436Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.609055Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.617923Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38979a8318efbb8d","to":"1ede913032f684f1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-29T18:38:53.617972Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.622877Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38979a8318efbb8d","to":"1ede913032f684f1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-29T18:38:53.623048Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:04.962210Z","caller":"traceutil/trace.go:171","msg":"trace[357842179] transaction","detail":"{read_only:false; response_revision:2419; number_of_response:1; }","duration":"108.88505ms","start":"2024-08-29T18:39:04.853296Z","end":"2024-08-29T18:39:04.962181Z","steps":["trace[357842179] 'process raft request'  (duration: 108.753483ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:39:37 up 13 min,  0 users,  load average: 0.32, 0.58, 0.41
	Linux ha-782425 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65] <==
	I0829 18:39:05.199076       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:39:15.194226       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:39:15.194372       1 main.go:299] handling current node
	I0829 18:39:15.194416       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:39:15.194441       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:39:15.194620       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:39:15.194659       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:39:15.194747       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:39:15.194767       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:39:25.194683       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:39:25.194859       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:39:25.195037       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:39:25.195065       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:39:25.195169       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:39:25.195197       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:39:25.195274       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:39:25.195306       1 main.go:299] handling current node
	I0829 18:39:35.198629       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:39:35.198702       1 main.go:299] handling current node
	I0829 18:39:35.198730       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:39:35.198741       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:39:35.199037       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:39:35.199065       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:39:35.199157       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:39:35.199415       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c] <==
	I0829 18:34:58.595206       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:08.603255       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:35:08.603312       1 main.go:299] handling current node
	I0829 18:35:08.603330       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:35:08.603338       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:35:08.603500       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:35:08.603517       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:08.603576       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:35:08.603591       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:35:18.597896       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:35:18.598049       1 main.go:299] handling current node
	I0829 18:35:18.598079       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:35:18.598097       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:35:18.598272       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:35:18.598295       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:18.598381       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:35:18.598400       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:35:28.594737       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:35:28.594835       1 main.go:299] handling current node
	I0829 18:35:28.594867       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:35:28.594875       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:35:28.595046       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:35:28.595066       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:28.595127       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:35:28.595144       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf] <==
	I0829 18:37:58.738228       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0829 18:37:58.795322       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 18:37:58.808819       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 18:37:58.808851       1 policy_source.go:224] refreshing policies
	I0829 18:37:58.831723       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 18:37:58.831757       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 18:37:58.831816       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 18:37:58.831923       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 18:37:58.832189       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 18:37:58.832602       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 18:37:58.833560       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 18:37:58.837042       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 18:37:58.839205       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 18:37:58.839303       1 aggregator.go:171] initial CRD sync complete...
	I0829 18:37:58.839338       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 18:37:58.839361       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 18:37:58.839383       1 cache.go:39] Caches are synced for autoregister controller
	W0829 18:37:58.843242       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.253]
	I0829 18:37:58.844720       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 18:37:58.851397       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0829 18:37:58.854296       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0829 18:37:58.892752       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 18:37:59.738359       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0829 18:38:00.068401       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.253 192.168.39.39]
	W0829 18:38:10.213619       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.253 192.168.39.39]
	
	
	==> kube-apiserver [8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a] <==
	I0829 18:37:13.693911       1 options.go:228] external host was not specified, using 192.168.39.39
	I0829 18:37:13.695722       1 server.go:142] Version: v1.31.0
	I0829 18:37:13.695812       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0829 18:37:14.150493       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:37:14.151215       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0829 18:37:14.152737       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0829 18:37:14.164160       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0829 18:37:14.164200       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0829 18:37:14.164454       1 instance.go:232] Using reconciler: lease
	I0829 18:37:14.164866       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0829 18:37:14.165669       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:37:34.150047       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0829 18:37:34.151199       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0829 18:37:34.165972       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c] <==
	I0829 18:38:19.341220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="29.230824ms"
	I0829 18:38:19.341356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-lqvlp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-lqvlp\": the object has been modified; please apply your changes to the latest version and try again"
	I0829 18:38:19.343500       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"de1b71e7-060b-4cf0-a7d4-c646abcc4be1", APIVersion:"v1", ResourceVersion:"262", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-lqvlp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-lqvlp": the object has been modified; please apply your changes to the latest version and try again
	I0829 18:38:19.344652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="43.459µs"
	I0829 18:38:38.095817       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:38:38.096070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m03"
	I0829 18:38:38.117732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:38:38.132941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m03"
	I0829 18:38:38.274847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.12064ms"
	I0829 18:38:38.275020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.919µs"
	I0829 18:38:41.686322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m03"
	I0829 18:38:41.705891       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m03"
	I0829 18:38:42.459703       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:38:42.648912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="160.316µs"
	I0829 18:38:43.461819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:38:45.226383       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m02"
	I0829 18:38:59.926502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:38:59.980235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:39:00.046950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.654476ms"
	I0829 18:39:00.048041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="175.856µs"
	I0829 18:39:12.559142       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m03"
	I0829 18:39:29.093010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:39:29.093195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-782425-m04"
	I0829 18:39:29.114226       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:39:29.946732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	
	
	==> kube-controller-manager [edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8] <==
	I0829 18:37:14.441942       1 serving.go:386] Generated self-signed cert in-memory
	I0829 18:37:14.777199       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0829 18:37:14.777300       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:37:14.779105       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0829 18:37:14.779924       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0829 18:37:14.780124       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 18:37:14.780243       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0829 18:37:35.171019       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.39:8443/healthz\": dial tcp 192.168.39.39:8443: connect: connection refused"
	
	
	==> kube-proxy [2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d] <==
	E0829 18:34:26.948722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:30.016332       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:30.016449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:30.016595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:30.016633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:30.016718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:30.016753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:36.161666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:36.161765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:36.161956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:36.162033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:36.162165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:36.162208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:45.376924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:45.377566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:48.448504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:48.449287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:48.449767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:48.449894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:35:06.881752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:35:06.882004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:35:06.882245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:35:06.882339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:35:09.953138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:35:09.953197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:37:25.121710       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:28.193043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:31.265734       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:37.408968       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:49.697739       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0829 18:38:07.368559       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.39"]
	E0829 18:38:07.368941       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:38:07.401141       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:38:07.401239       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:38:07.401294       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:38:07.403480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:38:07.403940       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:38:07.404177       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:38:07.406344       1 config.go:197] "Starting service config controller"
	I0829 18:38:07.406436       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:38:07.406481       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:38:07.406508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:38:07.408282       1 config.go:326] "Starting node config controller"
	I0829 18:38:07.408523       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:38:07.508871       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:38:07.508961       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:38:07.515116       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6] <==
	W0829 18:37:50.943300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:50.943451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:51.026122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.39:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:51.026284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.39:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:52.007919       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.39:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:52.008081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.39:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:52.254679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.39:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:52.254753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.39:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:52.291510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:52.291580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:53.051628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.39:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:53.051743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.39:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:53.074744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.39:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:53.074940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.39:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:53.947460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:53.947634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:54.229020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.39:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:54.229249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.39:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:55.222984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:55.223284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:55.290880       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.39:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:55.291014       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.39:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:56.492479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.39:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:56.492543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.39:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	I0829 18:38:11.185147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7] <==
	E0829 18:28:53.677627       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-h8k94" node="ha-782425-m03"
	E0829 18:28:53.677952       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" pod="default/busybox-7dff88458-h8k94"
	E0829 18:28:53.695276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:28:53.695376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e10fff1-6582-4f04-a07b-bd664457f72d(default/busybox-7dff88458-vwgrt) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vwgrt"
	E0829 18:28:53.695398       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" pod="default/busybox-7dff88458-vwgrt"
	I0829 18:28:53.695418       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:29:31.044983       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045106       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ee67d98e-b169-415c-ac85-e253e2888144(kube-system/kindnet-lbjt6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lbjt6"
	E0829 18:29:31.045132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" pod="kube-system/kindnet-lbjt6"
	I0829 18:29:31.045177       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	E0829 18:29:31.045987       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 278c58ce-3b1f-45c5-a1c9-0d2ce710f092(kube-system/kube-proxy-5xgbn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5xgbn"
	E0829 18:29:31.046008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" pod="kube-system/kube-proxy-5xgbn"
	I0829 18:29:31.046027       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	E0829 18:35:27.567323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0829 18:35:28.202485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0829 18:35:30.066852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0829 18:35:32.286517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0829 18:35:32.775523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0829 18:35:33.121047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0829 18:35:34.530054       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0829 18:35:34.982846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0829 18:35:35.077668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0829 18:35:35.901023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0829 18:35:35.939664       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 29 18:38:50 ha-782425 kubelet[1321]: I0829 18:38:50.743436    1321 scope.go:117] "RemoveContainer" containerID="31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7"
	Aug 29 18:38:50 ha-782425 kubelet[1321]: E0829 18:38:50.744075    1321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f41ebca1-035e-44b0-96a2-3aa1e794bc1f)\"" pod="kube-system/storage-provisioner" podUID="f41ebca1-035e-44b0-96a2-3aa1e794bc1f"
	Aug 29 18:38:51 ha-782425 kubelet[1321]: E0829 18:38:51.955460    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956731955147898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:38:51 ha-782425 kubelet[1321]: E0829 18:38:51.955501    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956731955147898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:38:53 ha-782425 kubelet[1321]: I0829 18:38:53.743192    1321 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-782425" podUID="83b3c3eb-b05b-47de-bc2a-ee1822b50b77"
	Aug 29 18:38:53 ha-782425 kubelet[1321]: I0829 18:38:53.762189    1321 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-782425"
	Aug 29 18:39:01 ha-782425 kubelet[1321]: E0829 18:39:01.957832    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956741957232456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:01 ha-782425 kubelet[1321]: E0829 18:39:01.958160    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956741957232456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:03 ha-782425 kubelet[1321]: I0829 18:39:03.743084    1321 scope.go:117] "RemoveContainer" containerID="31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7"
	Aug 29 18:39:03 ha-782425 kubelet[1321]: E0829 18:39:03.743291    1321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f41ebca1-035e-44b0-96a2-3aa1e794bc1f)\"" pod="kube-system/storage-provisioner" podUID="f41ebca1-035e-44b0-96a2-3aa1e794bc1f"
	Aug 29 18:39:11 ha-782425 kubelet[1321]: E0829 18:39:11.961330    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956751960742159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:11 ha-782425 kubelet[1321]: E0829 18:39:11.961606    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956751960742159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:16 ha-782425 kubelet[1321]: I0829 18:39:16.743380    1321 scope.go:117] "RemoveContainer" containerID="31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7"
	Aug 29 18:39:16 ha-782425 kubelet[1321]: E0829 18:39:16.743620    1321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f41ebca1-035e-44b0-96a2-3aa1e794bc1f)\"" pod="kube-system/storage-provisioner" podUID="f41ebca1-035e-44b0-96a2-3aa1e794bc1f"
	Aug 29 18:39:21 ha-782425 kubelet[1321]: E0829 18:39:21.762525    1321 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 18:39:21 ha-782425 kubelet[1321]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:39:21 ha-782425 kubelet[1321]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:39:21 ha-782425 kubelet[1321]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:39:21 ha-782425 kubelet[1321]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:39:21 ha-782425 kubelet[1321]: E0829 18:39:21.963848    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956761963491394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:21 ha-782425 kubelet[1321]: E0829 18:39:21.963874    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956761963491394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:30 ha-782425 kubelet[1321]: I0829 18:39:30.743079    1321 scope.go:117] "RemoveContainer" containerID="31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7"
	Aug 29 18:39:30 ha-782425 kubelet[1321]: E0829 18:39:30.743611    1321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f41ebca1-035e-44b0-96a2-3aa1e794bc1f)\"" pod="kube-system/storage-provisioner" podUID="f41ebca1-035e-44b0-96a2-3aa1e794bc1f"
	Aug 29 18:39:31 ha-782425 kubelet[1321]: E0829 18:39:31.966750    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956771965813768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:39:31 ha-782425 kubelet[1321]: E0829 18:39:31.967901    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956771965813768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:39:36.061490   39483 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
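The `bufio.Scanner: token too long` failure in the stderr block above is standard Go behavior when a single line in the file exceeds the scanner's buffer (64 KiB by default, `bufio.MaxScanTokenSize`). A minimal sketch, assuming a plain line-oriented log file such as lastStart.txt, of reading it with an enlarged per-line limit so oversized lines no longer abort the read:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path is illustrative; substitute the log file actually being read.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the 64 KiB default to 10 MiB so a very long
	// line no longer triggers "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```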
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-782425 -n ha-782425
helpers_test.go:261: (dbg) Run:  kubectl --context ha-782425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 stop -v=7 --alsologtostderr: exit status 82 (2m0.459102754s)

                                                
                                                
-- stdout --
	* Stopping node "ha-782425-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:39:54.967815   39877 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:39:54.968057   39877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:39:54.968070   39877 out.go:358] Setting ErrFile to fd 2...
	I0829 18:39:54.968074   39877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:39:54.968255   39877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:39:54.968454   39877 out.go:352] Setting JSON to false
	I0829 18:39:54.968524   39877 mustload.go:65] Loading cluster: ha-782425
	I0829 18:39:54.968872   39877 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:39:54.968958   39877 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:39:54.969130   39877 mustload.go:65] Loading cluster: ha-782425
	I0829 18:39:54.969247   39877 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:39:54.969268   39877 stop.go:39] StopHost: ha-782425-m04
	I0829 18:39:54.969667   39877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:39:54.969704   39877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:39:54.984424   39877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0829 18:39:54.984847   39877 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:39:54.985343   39877 main.go:141] libmachine: Using API Version  1
	I0829 18:39:54.985364   39877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:39:54.985670   39877 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:39:54.988430   39877 out.go:177] * Stopping node "ha-782425-m04"  ...
	I0829 18:39:54.989612   39877 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 18:39:54.989640   39877 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:39:54.989831   39877 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 18:39:54.989853   39877 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:39:54.992622   39877 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:39:54.993058   39877 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:39:23 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:39:54.993081   39877 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:39:54.993211   39877 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:39:54.993371   39877 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:39:54.993501   39877 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:39:54.993621   39877 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	I0829 18:39:55.075766   39877 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 18:39:55.127829   39877 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 18:39:55.179589   39877 main.go:141] libmachine: Stopping "ha-782425-m04"...
	I0829 18:39:55.179612   39877 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:39:55.181099   39877 main.go:141] libmachine: (ha-782425-m04) Calling .Stop
	I0829 18:39:55.184380   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 0/120
	I0829 18:39:56.185784   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 1/120
	I0829 18:39:57.187085   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 2/120
	I0829 18:39:58.188483   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 3/120
	I0829 18:39:59.189787   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 4/120
	I0829 18:40:00.191945   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 5/120
	I0829 18:40:01.193283   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 6/120
	I0829 18:40:02.194598   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 7/120
	I0829 18:40:03.196462   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 8/120
	I0829 18:40:04.197684   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 9/120
	I0829 18:40:05.198910   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 10/120
	I0829 18:40:06.200685   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 11/120
	I0829 18:40:07.202043   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 12/120
	I0829 18:40:08.203313   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 13/120
	I0829 18:40:09.204824   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 14/120
	I0829 18:40:10.206850   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 15/120
	I0829 18:40:11.208132   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 16/120
	I0829 18:40:12.210277   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 17/120
	I0829 18:40:13.211553   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 18/120
	I0829 18:40:14.212995   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 19/120
	I0829 18:40:15.215022   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 20/120
	I0829 18:40:16.216618   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 21/120
	I0829 18:40:17.217793   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 22/120
	I0829 18:40:18.219408   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 23/120
	I0829 18:40:19.220623   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 24/120
	I0829 18:40:20.222068   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 25/120
	I0829 18:40:21.223412   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 26/120
	I0829 18:40:22.224724   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 27/120
	I0829 18:40:23.226072   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 28/120
	I0829 18:40:24.227624   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 29/120
	I0829 18:40:25.229404   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 30/120
	I0829 18:40:26.231019   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 31/120
	I0829 18:40:27.232480   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 32/120
	I0829 18:40:28.233832   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 33/120
	I0829 18:40:29.235348   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 34/120
	I0829 18:40:30.237287   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 35/120
	I0829 18:40:31.238791   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 36/120
	I0829 18:40:32.240785   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 37/120
	I0829 18:40:33.242020   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 38/120
	I0829 18:40:34.244211   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 39/120
	I0829 18:40:35.246428   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 40/120
	I0829 18:40:36.247744   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 41/120
	I0829 18:40:37.249407   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 42/120
	I0829 18:40:38.250931   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 43/120
	I0829 18:40:39.252639   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 44/120
	I0829 18:40:40.254774   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 45/120
	I0829 18:40:41.256495   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 46/120
	I0829 18:40:42.257846   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 47/120
	I0829 18:40:43.259154   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 48/120
	I0829 18:40:44.260924   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 49/120
	I0829 18:40:45.262806   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 50/120
	I0829 18:40:46.264044   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 51/120
	I0829 18:40:47.265584   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 52/120
	I0829 18:40:48.267636   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 53/120
	I0829 18:40:49.269093   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 54/120
	I0829 18:40:50.270880   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 55/120
	I0829 18:40:51.272931   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 56/120
	I0829 18:40:52.274555   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 57/120
	I0829 18:40:53.276768   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 58/120
	I0829 18:40:54.278176   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 59/120
	I0829 18:40:55.280288   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 60/120
	I0829 18:40:56.281807   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 61/120
	I0829 18:40:57.283234   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 62/120
	I0829 18:40:58.285196   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 63/120
	I0829 18:40:59.287395   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 64/120
	I0829 18:41:00.289116   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 65/120
	I0829 18:41:01.290969   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 66/120
	I0829 18:41:02.292371   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 67/120
	I0829 18:41:03.293873   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 68/120
	I0829 18:41:04.295260   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 69/120
	I0829 18:41:05.297366   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 70/120
	I0829 18:41:06.298920   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 71/120
	I0829 18:41:07.300769   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 72/120
	I0829 18:41:08.302239   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 73/120
	I0829 18:41:09.303528   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 74/120
	I0829 18:41:10.305531   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 75/120
	I0829 18:41:11.307074   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 76/120
	I0829 18:41:12.308690   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 77/120
	I0829 18:41:13.310145   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 78/120
	I0829 18:41:14.311772   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 79/120
	I0829 18:41:15.313977   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 80/120
	I0829 18:41:16.315556   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 81/120
	I0829 18:41:17.317193   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 82/120
	I0829 18:41:18.318773   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 83/120
	I0829 18:41:19.320108   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 84/120
	I0829 18:41:20.322202   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 85/120
	I0829 18:41:21.323539   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 86/120
	I0829 18:41:22.324888   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 87/120
	I0829 18:41:23.326433   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 88/120
	I0829 18:41:24.328566   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 89/120
	I0829 18:41:25.330886   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 90/120
	I0829 18:41:26.332644   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 91/120
	I0829 18:41:27.334184   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 92/120
	I0829 18:41:28.335666   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 93/120
	I0829 18:41:29.337262   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 94/120
	I0829 18:41:30.339146   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 95/120
	I0829 18:41:31.340685   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 96/120
	I0829 18:41:32.342179   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 97/120
	I0829 18:41:33.343494   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 98/120
	I0829 18:41:34.345058   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 99/120
	I0829 18:41:35.347317   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 100/120
	I0829 18:41:36.348862   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 101/120
	I0829 18:41:37.350368   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 102/120
	I0829 18:41:38.351631   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 103/120
	I0829 18:41:39.353054   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 104/120
	I0829 18:41:40.354811   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 105/120
	I0829 18:41:41.356477   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 106/120
	I0829 18:41:42.357923   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 107/120
	I0829 18:41:43.359289   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 108/120
	I0829 18:41:44.360614   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 109/120
	I0829 18:41:45.362819   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 110/120
	I0829 18:41:46.364227   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 111/120
	I0829 18:41:47.365705   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 112/120
	I0829 18:41:48.367061   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 113/120
	I0829 18:41:49.368660   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 114/120
	I0829 18:41:50.370702   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 115/120
	I0829 18:41:51.372632   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 116/120
	I0829 18:41:52.374009   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 117/120
	I0829 18:41:53.375469   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 118/120
	I0829 18:41:54.376904   39877 main.go:141] libmachine: (ha-782425-m04) Waiting for machine to stop 119/120
	I0829 18:41:55.378132   39877 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 18:41:55.378204   39877 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 18:41:55.379916   39877 out.go:201] 
	W0829 18:41:55.380905   39877 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 18:41:55.380919   39877 out.go:270] * 
	* 
	W0829 18:41:55.383435   39877 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 18:41:55.384452   39877 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-782425 stop -v=7 --alsologtostderr": exit status 82
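The 2m0.4s duration of the failed stop matches the 120 one-second polls ("Waiting for machine to stop 0/120" through "119/120") visible in the stderr block before the GUEST_STOP_TIMEOUT exit. A simplified sketch of that polling pattern, with `requestStop` and `getState` as hypothetical stand-ins for the driver calls rather than minikube's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout requests a stop, then polls the machine state once per second
// for up to maxPolls attempts; if the machine never reports "Stopped" it gives up,
// analogous to the "unable to stop vm, current state \"Running\"" outcome above.
func stopWithTimeout(requestStop func() error, getState func() (string, error), maxPolls int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxPolls; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxPolls)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Toy driver that never reaches "Stopped", so the call fails after maxPolls
	// seconds; the real run uses 120 polls, hence the ~2 minute timeout.
	err := stopWithTimeout(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		5, // shortened from 120 so the demo finishes quickly
	)
	fmt.Println("stop result:", err)
}
```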
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr: exit status 3 (18.984915417s)

                                                
                                                
-- stdout --
	ha-782425
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782425-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:41:55.428200   40328 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:41:55.428317   40328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:41:55.428327   40328 out.go:358] Setting ErrFile to fd 2...
	I0829 18:41:55.428332   40328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:41:55.428542   40328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:41:55.428729   40328 out.go:352] Setting JSON to false
	I0829 18:41:55.428774   40328 mustload.go:65] Loading cluster: ha-782425
	I0829 18:41:55.428808   40328 notify.go:220] Checking for updates...
	I0829 18:41:55.429202   40328 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:41:55.429216   40328 status.go:255] checking status of ha-782425 ...
	I0829 18:41:55.429643   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.429706   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.452741   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45039
	I0829 18:41:55.453163   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.453752   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.453784   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.454100   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.454340   40328 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:41:55.455769   40328 status.go:330] ha-782425 host status = "Running" (err=<nil>)
	I0829 18:41:55.455785   40328 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:41:55.456117   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.456156   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.470581   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I0829 18:41:55.470986   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.471395   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.471413   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.471722   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.471880   40328 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:41:55.474737   40328 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:41:55.475246   40328 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:41:55.475283   40328 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:41:55.475447   40328 host.go:66] Checking if "ha-782425" exists ...
	I0829 18:41:55.475767   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.475812   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.490645   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0829 18:41:55.491045   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.491544   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.491571   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.491878   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.492057   40328 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:41:55.492219   40328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:41:55.492249   40328 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:41:55.494727   40328 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:41:55.495142   40328 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:41:55.495181   40328 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:41:55.495289   40328 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:41:55.495464   40328 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:41:55.495612   40328 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:41:55.495739   40328 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:41:55.579266   40328 ssh_runner.go:195] Run: systemctl --version
	I0829 18:41:55.586106   40328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:41:55.602389   40328 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:41:55.602428   40328 api_server.go:166] Checking apiserver status ...
	I0829 18:41:55.602461   40328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:41:55.617875   40328 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4988/cgroup
	W0829 18:41:55.626924   40328 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4988/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:41:55.626981   40328 ssh_runner.go:195] Run: ls
	I0829 18:41:55.631269   40328 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:41:55.635906   40328 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:41:55.635923   40328 status.go:422] ha-782425 apiserver status = Running (err=<nil>)
	I0829 18:41:55.635933   40328 status.go:257] ha-782425 status: &{Name:ha-782425 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:41:55.635967   40328 status.go:255] checking status of ha-782425-m02 ...
	I0829 18:41:55.636278   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.636319   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.650772   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38747
	I0829 18:41:55.651199   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.651634   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.651651   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.652034   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.652201   40328 main.go:141] libmachine: (ha-782425-m02) Calling .GetState
	I0829 18:41:55.653874   40328 status.go:330] ha-782425-m02 host status = "Running" (err=<nil>)
	I0829 18:41:55.653889   40328 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:41:55.654222   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.654262   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.668502   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0829 18:41:55.668875   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.669271   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.669298   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.669602   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.669766   40328 main.go:141] libmachine: (ha-782425-m02) Calling .GetIP
	I0829 18:41:55.672361   40328 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:41:55.672754   40328 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:37:23 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:41:55.672777   40328 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:41:55.672924   40328 host.go:66] Checking if "ha-782425-m02" exists ...
	I0829 18:41:55.673214   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.673245   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.687855   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0829 18:41:55.688190   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.688616   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.688633   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.688910   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.689087   40328 main.go:141] libmachine: (ha-782425-m02) Calling .DriverName
	I0829 18:41:55.689274   40328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:41:55.689297   40328 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHHostname
	I0829 18:41:55.691757   40328 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:41:55.692113   40328 main.go:141] libmachine: (ha-782425-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:79:c5", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:37:23 +0000 UTC Type:0 Mac:52:54:00:15:79:c5 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-782425-m02 Clientid:01:52:54:00:15:79:c5}
	I0829 18:41:55.692142   40328 main.go:141] libmachine: (ha-782425-m02) DBG | domain ha-782425-m02 has defined IP address 192.168.39.253 and MAC address 52:54:00:15:79:c5 in network mk-ha-782425
	I0829 18:41:55.692269   40328 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHPort
	I0829 18:41:55.692409   40328 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHKeyPath
	I0829 18:41:55.692524   40328 main.go:141] libmachine: (ha-782425-m02) Calling .GetSSHUsername
	I0829 18:41:55.692644   40328 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m02/id_rsa Username:docker}
	I0829 18:41:55.778888   40328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:41:55.796472   40328 kubeconfig.go:125] found "ha-782425" server: "https://192.168.39.254:8443"
	I0829 18:41:55.796508   40328 api_server.go:166] Checking apiserver status ...
	I0829 18:41:55.796556   40328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:41:55.811792   40328 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup
	W0829 18:41:55.822024   40328 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1522/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:41:55.822076   40328 ssh_runner.go:195] Run: ls
	I0829 18:41:55.825859   40328 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0829 18:41:55.830446   40328 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0829 18:41:55.830479   40328 status.go:422] ha-782425-m02 apiserver status = Running (err=<nil>)
	I0829 18:41:55.830487   40328 status.go:257] ha-782425-m02 status: &{Name:ha-782425-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:41:55.830504   40328 status.go:255] checking status of ha-782425-m04 ...
	I0829 18:41:55.830829   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.830874   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.845611   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0829 18:41:55.846016   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.846598   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.846626   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.847000   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.847174   40328 main.go:141] libmachine: (ha-782425-m04) Calling .GetState
	I0829 18:41:55.848817   40328 status.go:330] ha-782425-m04 host status = "Running" (err=<nil>)
	I0829 18:41:55.848837   40328 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:41:55.849107   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.849144   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.864159   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
	I0829 18:41:55.864522   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.864970   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.864990   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.865269   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.865471   40328 main.go:141] libmachine: (ha-782425-m04) Calling .GetIP
	I0829 18:41:55.868234   40328 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:41:55.868647   40328 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:39:23 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:41:55.868671   40328 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:41:55.868856   40328 host.go:66] Checking if "ha-782425-m04" exists ...
	I0829 18:41:55.869245   40328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:41:55.869288   40328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:41:55.884208   40328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0829 18:41:55.884631   40328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:41:55.885047   40328 main.go:141] libmachine: Using API Version  1
	I0829 18:41:55.885070   40328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:41:55.885365   40328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:41:55.885531   40328 main.go:141] libmachine: (ha-782425-m04) Calling .DriverName
	I0829 18:41:55.885734   40328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:41:55.885757   40328 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHHostname
	I0829 18:41:55.888255   40328 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:41:55.888615   40328 main.go:141] libmachine: (ha-782425-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:74:46", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:39:23 +0000 UTC Type:0 Mac:52:54:00:f1:74:46 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:ha-782425-m04 Clientid:01:52:54:00:f1:74:46}
	I0829 18:41:55.888645   40328 main.go:141] libmachine: (ha-782425-m04) DBG | domain ha-782425-m04 has defined IP address 192.168.39.235 and MAC address 52:54:00:f1:74:46 in network mk-ha-782425
	I0829 18:41:55.888749   40328 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHPort
	I0829 18:41:55.888904   40328 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHKeyPath
	I0829 18:41:55.889049   40328 main.go:141] libmachine: (ha-782425-m04) Calling .GetSSHUsername
	I0829 18:41:55.889183   40328 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425-m04/id_rsa Username:docker}
	W0829 18:42:14.370307   40328 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.235:22: connect: no route to host
	W0829 18:42:14.370390   40328 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0829 18:42:14.370406   40328 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	I0829 18:42:14.370416   40328 status.go:257] ha-782425-m04 status: &{Name:ha-782425-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0829 18:42:14.370435   40328 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr" : exit status 3
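Note: the stderr above shows the per-node status probe — find a running kube-apiserver over SSH, then GET the load-balanced https://192.168.39.254:8443/healthz endpoint and treat a 200 "ok" as healthy. The failure itself comes only from the SSH dial to ha-782425-m04 (192.168.39.235:22) returning "no route to host" after the stop, which marks the node Host:Error and yields exit status 3. The following is a minimal, self-contained Go sketch of just that healthz probe for illustration; it is not minikube's own code, and the hard-coded HA VIP URL and the insecure TLS setting are assumptions made to keep the example runnable.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs the same kind of probe the log records at
	// api_server.go:253/279: GET https://<ha-vip>:8443/healthz and expect 200 "ok".
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver certificate is signed by the cluster CA; verification is
			// skipped here only to keep this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("%s returned %d: %q", url, resp.StatusCode, body)
		}
		fmt.Printf("%s returned %d: ok\n", url, resp.StatusCode)
		return nil
	}

	func main() {
		// 192.168.39.254:8443 is the APIServerHAVIP shown in the cluster config below.
		if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
			fmt.Println("apiserver status check failed:", err)
		}
	}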
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-782425 -n ha-782425
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-782425 logs -n 25: (1.610294744s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m04 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp testdata/cp-test.txt                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425:/home/docker/cp-test_ha-782425-m04_ha-782425.txt                       |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425 sudo cat                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425.txt                                 |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m02:/home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m02 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m03:/home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n                                                                 | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | ha-782425-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-782425 ssh -n ha-782425-m03 sudo cat                                          | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC | 29 Aug 24 18:30 UTC |
	|         | /home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-782425 node stop m02 -v=7                                                     | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-782425 node start m02 -v=7                                                    | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-782425 -v=7                                                           | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-782425 -v=7                                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-782425 --wait=true -v=7                                                    | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:35 UTC | 29 Aug 24 18:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-782425                                                                | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:39 UTC |                     |
	| node    | ha-782425 node delete m03 -v=7                                                   | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:39 UTC | 29 Aug 24 18:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-782425 stop -v=7                                                              | ha-782425 | jenkins | v1.33.1 | 29 Aug 24 18:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:35:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:35:35.094293   38130 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:35:35.094416   38130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:35:35.094428   38130 out.go:358] Setting ErrFile to fd 2...
	I0829 18:35:35.094435   38130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:35:35.094679   38130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:35:35.095349   38130 out.go:352] Setting JSON to false
	I0829 18:35:35.096524   38130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4682,"bootTime":1724951853,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:35:35.096588   38130 start.go:139] virtualization: kvm guest
	I0829 18:35:35.098697   38130 out.go:177] * [ha-782425] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:35:35.100174   38130 notify.go:220] Checking for updates...
	I0829 18:35:35.100249   38130 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:35:35.101742   38130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:35:35.103064   38130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:35:35.104323   38130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:35:35.105553   38130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:35:35.106702   38130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:35:35.108193   38130 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:35:35.108300   38130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:35:35.108913   38130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:35:35.108970   38130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:35:35.124238   38130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0829 18:35:35.124678   38130 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:35:35.125208   38130 main.go:141] libmachine: Using API Version  1
	I0829 18:35:35.125227   38130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:35:35.125527   38130 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:35:35.125694   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:35:35.160928   38130 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 18:35:35.162147   38130 start.go:297] selected driver: kvm2
	I0829 18:35:35.162163   38130 start.go:901] validating driver "kvm2" against &{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:35:35.162338   38130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:35:35.162644   38130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:35:35.162721   38130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:35:35.177483   38130 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:35:35.178388   38130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:35:35.178475   38130 cni.go:84] Creating CNI manager for ""
	I0829 18:35:35.178491   38130 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 18:35:35.178555   38130 start.go:340] cluster config:
	{Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:35:35.178724   38130 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:35:35.180660   38130 out.go:177] * Starting "ha-782425" primary control-plane node in "ha-782425" cluster
	I0829 18:35:35.181854   38130 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:35:35.181887   38130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:35:35.181894   38130 cache.go:56] Caching tarball of preloaded images
	I0829 18:35:35.181956   38130 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:35:35.181966   38130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:35:35.182074   38130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/config.json ...
	I0829 18:35:35.182290   38130 start.go:360] acquireMachinesLock for ha-782425: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:35:35.182357   38130 start.go:364] duration metric: took 49.226µs to acquireMachinesLock for "ha-782425"
	I0829 18:35:35.182371   38130 start.go:96] Skipping create...Using existing machine configuration
	I0829 18:35:35.182376   38130 fix.go:54] fixHost starting: 
	I0829 18:35:35.182641   38130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:35:35.182670   38130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:35:35.197637   38130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38751
	I0829 18:35:35.198027   38130 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:35:35.198631   38130 main.go:141] libmachine: Using API Version  1
	I0829 18:35:35.198659   38130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:35:35.198997   38130 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:35:35.199234   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:35:35.199426   38130 main.go:141] libmachine: (ha-782425) Calling .GetState
	I0829 18:35:35.200995   38130 fix.go:112] recreateIfNeeded on ha-782425: state=Running err=<nil>
	W0829 18:35:35.201014   38130 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 18:35:35.202798   38130 out.go:177] * Updating the running kvm2 "ha-782425" VM ...
	I0829 18:35:35.204027   38130 machine.go:93] provisionDockerMachine start ...
	I0829 18:35:35.204054   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:35:35.204238   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.206531   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.206918   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.206945   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.207060   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.207249   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.207392   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.207535   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.207740   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.207926   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.207936   38130 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:35:35.318798   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425
	
	I0829 18:35:35.318825   38130 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:35:35.319091   38130 buildroot.go:166] provisioning hostname "ha-782425"
	I0829 18:35:35.319114   38130 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:35:35.319296   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.321974   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.322391   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.322427   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.322522   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.322700   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.322867   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.323100   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.323286   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.323472   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.323493   38130 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-782425 && echo "ha-782425" | sudo tee /etc/hostname
	I0829 18:35:35.448806   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-782425
	
	I0829 18:35:35.448837   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.451650   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.452049   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.452075   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.452253   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.452447   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.452609   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.452727   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.452881   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.453080   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.453099   38130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-782425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-782425/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-782425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:35:35.566817   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:35:35.566843   38130 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:35:35.566874   38130 buildroot.go:174] setting up certificates
	I0829 18:35:35.566886   38130 provision.go:84] configureAuth start
	I0829 18:35:35.566902   38130 main.go:141] libmachine: (ha-782425) Calling .GetMachineName
	I0829 18:35:35.567150   38130 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:35:35.569710   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.570061   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.570102   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.570266   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.572471   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.572825   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.572853   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.572961   38130 provision.go:143] copyHostCerts
	I0829 18:35:35.572990   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:35:35.573027   38130 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:35:35.573043   38130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:35:35.573104   38130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:35:35.573186   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:35:35.573204   38130 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:35:35.573208   38130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:35:35.573230   38130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:35:35.573281   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:35:35.573299   38130 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:35:35.573302   38130 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:35:35.573322   38130 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:35:35.573382   38130 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.ha-782425 san=[127.0.0.1 192.168.39.39 ha-782425 localhost minikube]
	I0829 18:35:35.660260   38130 provision.go:177] copyRemoteCerts
	I0829 18:35:35.660322   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:35:35.660343   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.662854   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.663213   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.663239   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.663424   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.663604   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.663746   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.663877   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:35:35.748557   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:35:35.748632   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:35:35.774522   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:35:35.774604   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0829 18:35:35.802420   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:35:35.802488   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:35:35.827873   38130 provision.go:87] duration metric: took 260.972399ms to configureAuth
	I0829 18:35:35.827898   38130 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:35:35.828112   38130 config.go:182] Loaded profile config "ha-782425": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:35:35.828174   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:35:35.830937   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.831288   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:35:35.831326   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:35:35.831524   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:35:35.831721   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.831864   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:35:35.832001   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:35:35.832152   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:35:35.832321   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:35:35.832354   38130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 18:37:06.632618   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 18:37:06.632645   38130 machine.go:96] duration metric: took 1m31.428598655s to provisionDockerMachine
	I0829 18:37:06.632658   38130 start.go:293] postStartSetup for "ha-782425" (driver="kvm2")
	I0829 18:37:06.632670   38130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:37:06.632685   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.632999   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:37:06.633028   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.636076   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.636641   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.636663   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.636819   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.637070   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.637222   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.637387   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:37:06.724847   38130 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:37:06.728820   38130 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 18:37:06.728845   38130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 18:37:06.728907   38130 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 18:37:06.729018   38130 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 18:37:06.729032   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 18:37:06.729144   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 18:37:06.739337   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:37:06.762335   38130 start.go:296] duration metric: took 129.660855ms for postStartSetup
	I0829 18:37:06.762380   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.762707   38130 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0829 18:37:06.762732   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.765548   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.765926   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.765951   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.766157   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.766350   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.766509   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.766664   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	W0829 18:37:06.847860   38130 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0829 18:37:06.847893   38130 fix.go:56] duration metric: took 1m31.665516475s for fixHost
	I0829 18:37:06.847919   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.850431   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.850823   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.850849   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.850959   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.851137   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.851248   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.851400   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.851568   38130 main.go:141] libmachine: Using SSH client type: native
	I0829 18:37:06.851787   38130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0829 18:37:06.851801   38130 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 18:37:06.962643   38130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724956626.917922794
	
	I0829 18:37:06.962669   38130 fix.go:216] guest clock: 1724956626.917922794
	I0829 18:37:06.962681   38130 fix.go:229] Guest: 2024-08-29 18:37:06.917922794 +0000 UTC Remote: 2024-08-29 18:37:06.847901124 +0000 UTC m=+91.789559535 (delta=70.02167ms)
	I0829 18:37:06.962708   38130 fix.go:200] guest clock delta is within tolerance: 70.02167ms
	I0829 18:37:06.962718   38130 start.go:83] releasing machines lock for "ha-782425", held for 1m31.780350669s
	I0829 18:37:06.962748   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.963013   38130 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:37:06.965215   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.965584   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.965610   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.965803   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.966366   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.966537   38130 main.go:141] libmachine: (ha-782425) Calling .DriverName
	I0829 18:37:06.966630   38130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:37:06.966674   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.966734   38130 ssh_runner.go:195] Run: cat /version.json
	I0829 18:37:06.966756   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHHostname
	I0829 18:37:06.969172   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969204   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969538   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.969561   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969600   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:06.969620   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:06.969675   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.969859   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHPort
	I0829 18:37:06.969861   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.970044   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.970046   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHKeyPath
	I0829 18:37:06.970230   38130 main.go:141] libmachine: (ha-782425) Calling .GetSSHUsername
	I0829 18:37:06.970245   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:37:06.970353   38130 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/ha-782425/id_rsa Username:docker}
	I0829 18:37:07.109014   38130 ssh_runner.go:195] Run: systemctl --version
	I0829 18:37:07.115576   38130 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 18:37:07.274740   38130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 18:37:07.283660   38130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 18:37:07.283729   38130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:37:07.293055   38130 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 18:37:07.293079   38130 start.go:495] detecting cgroup driver to use...
	I0829 18:37:07.293137   38130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 18:37:07.309980   38130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 18:37:07.324647   38130 docker.go:217] disabling cri-docker service (if available) ...
	I0829 18:37:07.324737   38130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 18:37:07.338703   38130 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 18:37:07.354049   38130 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 18:37:07.504773   38130 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 18:37:07.652012   38130 docker.go:233] disabling docker service ...
	I0829 18:37:07.652076   38130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 18:37:07.668406   38130 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 18:37:07.681988   38130 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 18:37:07.827358   38130 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 18:37:07.970168   38130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 18:37:07.984429   38130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:37:08.003178   38130 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 18:37:08.003247   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.014177   38130 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 18:37:08.014238   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.024932   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.036897   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.047166   38130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:37:08.057641   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.068105   38130 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.081031   38130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 18:37:08.091246   38130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:37:08.100430   38130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:37:08.109910   38130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:37:08.255675   38130 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 18:37:12.170058   38130 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.914338259s)
	I0829 18:37:12.170099   38130 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 18:37:12.170149   38130 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 18:37:12.174849   38130 start.go:563] Will wait 60s for crictl version
	I0829 18:37:12.174892   38130 ssh_runner.go:195] Run: which crictl
	I0829 18:37:12.178204   38130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:37:12.214459   38130 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 18:37:12.214540   38130 ssh_runner.go:195] Run: crio --version
	I0829 18:37:12.242960   38130 ssh_runner.go:195] Run: crio --version
	I0829 18:37:12.271960   38130 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 18:37:12.273405   38130 main.go:141] libmachine: (ha-782425) Calling .GetIP
	I0829 18:37:12.275817   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:12.276142   38130 main.go:141] libmachine: (ha-782425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:37:dc", ip: ""} in network mk-ha-782425: {Iface:virbr1 ExpiryTime:2024-08-29 19:25:51 +0000 UTC Type:0 Mac:52:54:00:4e:37:dc Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-782425 Clientid:01:52:54:00:4e:37:dc}
	I0829 18:37:12.276166   38130 main.go:141] libmachine: (ha-782425) DBG | domain ha-782425 has defined IP address 192.168.39.39 and MAC address 52:54:00:4e:37:dc in network mk-ha-782425
	I0829 18:37:12.276386   38130 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 18:37:12.280745   38130 kubeadm.go:883] updating cluster {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:37:12.280942   38130 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:37:12.281003   38130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:37:12.321741   38130 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:37:12.321760   38130 crio.go:433] Images already preloaded, skipping extraction
	I0829 18:37:12.321800   38130 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 18:37:12.357183   38130 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 18:37:12.357200   38130 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:37:12.357208   38130 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.31.0 crio true true} ...
	I0829 18:37:12.357335   38130 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-782425 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:37:12.357432   38130 ssh_runner.go:195] Run: crio config
	I0829 18:37:12.402538   38130 cni.go:84] Creating CNI manager for ""
	I0829 18:37:12.402564   38130 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 18:37:12.402595   38130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:37:12.402627   38130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-782425 NodeName:ha-782425 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:37:12.402779   38130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-782425"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 18:37:12.402795   38130 kube-vip.go:115] generating kube-vip config ...
	I0829 18:37:12.402834   38130 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0829 18:37:12.414324   38130 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0829 18:37:12.414474   38130 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0829 18:37:12.414543   38130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:37:12.423866   38130 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:37:12.423938   38130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0829 18:37:12.433054   38130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0829 18:37:12.450034   38130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:37:12.466019   38130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0829 18:37:12.481895   38130 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0829 18:37:12.500904   38130 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0829 18:37:12.504744   38130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:37:12.647294   38130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:37:12.661336   38130 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425 for IP: 192.168.39.39
	I0829 18:37:12.661359   38130 certs.go:194] generating shared ca certs ...
	I0829 18:37:12.661378   38130 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:37:12.661537   38130 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 18:37:12.661592   38130 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 18:37:12.661606   38130 certs.go:256] generating profile certs ...
	I0829 18:37:12.661702   38130 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/client.key
	I0829 18:37:12.661736   38130 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721
	I0829 18:37:12.661763   38130 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.39 192.168.39.253 192.168.39.220 192.168.39.254]
	I0829 18:37:12.721553   38130 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721 ...
	I0829 18:37:12.721584   38130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721: {Name:mkae0fb68c3921a8e6389bf55233edae9c484b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:37:12.721767   38130 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721 ...
	I0829 18:37:12.721783   38130 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721: {Name:mkfc0e4e7d4b044277a1f2550ca717ba5e4c6653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:37:12.721874   38130 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt.aa9a4721 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt
	I0829 18:37:12.722047   38130 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key.aa9a4721 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key
	I0829 18:37:12.722216   38130 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key
	I0829 18:37:12.722235   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 18:37:12.722253   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 18:37:12.722273   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 18:37:12.722292   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 18:37:12.722311   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 18:37:12.722336   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 18:37:12.722360   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 18:37:12.722378   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 18:37:12.722445   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 18:37:12.722495   38130 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 18:37:12.722510   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:37:12.722542   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 18:37:12.722577   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:37:12.722624   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 18:37:12.722692   38130 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 18:37:12.722735   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 18:37:12.722760   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:12.722778   38130 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 18:37:12.723322   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:37:12.747935   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:37:12.770946   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:37:12.793697   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:37:12.815754   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 18:37:12.837298   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:37:12.859152   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:37:12.882330   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/ha-782425/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 18:37:12.904265   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 18:37:12.926027   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:37:12.949376   38130 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 18:37:12.971128   38130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:37:12.987090   38130 ssh_runner.go:195] Run: openssl version
	I0829 18:37:12.992736   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 18:37:13.003053   38130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 18:37:13.007151   38130 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 18:37:13.007198   38130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 18:37:13.012497   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 18:37:13.021466   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:37:13.031542   38130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:13.035694   38130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:13.035771   38130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:37:13.041212   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:37:13.050667   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 18:37:13.061349   38130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 18:37:13.065275   38130 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 18:37:13.065333   38130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 18:37:13.070551   38130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 18:37:13.096933   38130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:37:13.114208   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 18:37:13.124544   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 18:37:13.131351   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 18:37:13.137142   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 18:37:13.145443   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 18:37:13.158345   38130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 18:37:13.183192   38130 kubeadm.go:392] StartCluster: {Name:ha-782425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-782425 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.235 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:37:13.183304   38130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 18:37:13.183396   38130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 18:37:13.385010   38130 cri.go:89] found id: "450cc9d333192a050ee909372d05ad41a7242c093e83aafcf4e11dc2de735d10"
	I0829 18:37:13.385040   38130 cri.go:89] found id: "767087c78fa49bd5c1e4737317c00b8963261061039db2412620080ab784d984"
	I0829 18:37:13.385046   38130 cri.go:89] found id: "d6702bcf56ba304efd93a1f2eaac34664bb61926ecb61581099b71b28ed8cc90"
	I0829 18:37:13.385050   38130 cri.go:89] found id: "519a79c3fb1fe04e97738d1eb203c5fd726d83556a4664704ac9fd4f716b0811"
	I0829 18:37:13.385054   38130 cri.go:89] found id: "409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902"
	I0829 18:37:13.385059   38130 cri.go:89] found id: "4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c"
	I0829 18:37:13.385062   38130 cri.go:89] found id: "23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c"
	I0829 18:37:13.385065   38130 cri.go:89] found id: "2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d"
	I0829 18:37:13.385067   38130 cri.go:89] found id: "216684e1555951dcb1c3a39517bf4a8c25da68c22cb5dd013a12ce46d50ed3c4"
	I0829 18:37:13.385072   38130 cri.go:89] found id: "5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240"
	I0829 18:37:13.385075   38130 cri.go:89] found id: "a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7"
	I0829 18:37:13.385091   38130 cri.go:89] found id: "24877a3e0c79c9cda3862a8eec226d8ab981fed9522707a5e23bf3114832c434"
	I0829 18:37:13.385095   38130 cri.go:89] found id: "33ef8a4b863ba396265906bfb135c4d78e0ec7bc4e4863880325735a054ce292"
	I0829 18:37:13.385101   38130 cri.go:89] found id: ""
	I0829 18:37:13.385149   38130 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 29 18:42:14 ha-782425 crio[3720]: time="2024-08-29 18:42:14.972377387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956934972347304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51a968c9-9763-4505-b73d-6fc866b39b13 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:14 ha-782425 crio[3720]: time="2024-08-29 18:42:14.973251904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6da4eb2-265d-4867-964b-8e776cdd849a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:14 ha-782425 crio[3720]: time="2024-08-29 18:42:14.973459848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6da4eb2-265d-4867-964b-8e776cdd849a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:14 ha-782425 crio[3720]: time="2024-08-29 18:42:14.974147483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c718c58f65081dc2e5aadcaabc17e9028807a591491ad13742b21bd4d36331b,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724956804754864221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61da642e1b6a90457768c8a2a29f25d6d784c179cce01ab22de265bc05135898,PodSandboxId:a25f91827c1298c776d43d452fbcc51e09c3c6e8437d813e27cfb8fcf0074ce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956677815596945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724956676755763508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724956673762499415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99640f096bde6522770020b2249c41e85682111e94d36b9cd851593863a8ef29,PodSandboxId:60e047cd78823a50ad609bf0e147de8e256b54ee232893b23e18a36b5601fe9e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724956645211094274,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b999853b85936b403e953d43f9f09979,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65,PodSandboxId:1b441512c4428695d913c9f4a6d0e4801ed79c1cf1f2d727c91fc539ed988656,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724956644147256399,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a,PodSandboxId:312c52b155c2894017c4d59c923caa4eec4f963377bd5795dcddaacfe652acb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956644036103274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6,PodSandboxId:98c171db62011a31f1e5b96fdd7ac555f5fc1756d6f17abf00c7bdb02bbf77a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724956643955850912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab,PodSandboxId:91f0d779161a7bd935d92644834529cce86bfbdcf46737c08dd1c7d3bfa4016e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956640906958569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a,PodSandboxId:b3d37d9275f30d0fd89144cac3db73cc309f7cc735e96b7e24d80d8f8889f814,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724956634268028784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724956633504160407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41eb
ca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6,PodSandboxId:15ef2a08cc118654fb489a2c672412210856b02d25a475acc47023b724dee08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724956633453755653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c
492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724956633400000614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724956633255170031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724956137320713034,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999481973684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999444863415,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724955987639328499,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724955984165847613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724955973084150044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724955973067497563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6da4eb2-265d-4867-964b-8e776cdd849a name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.028096133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e39014d-e4a2-43e0-8923-f398e92e5320 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.028219507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e39014d-e4a2-43e0-8923-f398e92e5320 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.030283319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7a77617-ef5e-417d-9127-3f5f3f79b3ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.031113784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956935031075568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7a77617-ef5e-417d-9127-3f5f3f79b3ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.032411060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13c2b08f-2099-47da-a40f-7838f1f9fe4d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.032478584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13c2b08f-2099-47da-a40f-7838f1f9fe4d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.032987349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c718c58f65081dc2e5aadcaabc17e9028807a591491ad13742b21bd4d36331b,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724956804754864221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61da642e1b6a90457768c8a2a29f25d6d784c179cce01ab22de265bc05135898,PodSandboxId:a25f91827c1298c776d43d452fbcc51e09c3c6e8437d813e27cfb8fcf0074ce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956677815596945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724956676755763508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724956673762499415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99640f096bde6522770020b2249c41e85682111e94d36b9cd851593863a8ef29,PodSandboxId:60e047cd78823a50ad609bf0e147de8e256b54ee232893b23e18a36b5601fe9e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724956645211094274,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b999853b85936b403e953d43f9f09979,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65,PodSandboxId:1b441512c4428695d913c9f4a6d0e4801ed79c1cf1f2d727c91fc539ed988656,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724956644147256399,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a,PodSandboxId:312c52b155c2894017c4d59c923caa4eec4f963377bd5795dcddaacfe652acb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956644036103274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6,PodSandboxId:98c171db62011a31f1e5b96fdd7ac555f5fc1756d6f17abf00c7bdb02bbf77a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724956643955850912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab,PodSandboxId:91f0d779161a7bd935d92644834529cce86bfbdcf46737c08dd1c7d3bfa4016e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956640906958569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a,PodSandboxId:b3d37d9275f30d0fd89144cac3db73cc309f7cc735e96b7e24d80d8f8889f814,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724956634268028784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724956633504160407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41eb
ca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6,PodSandboxId:15ef2a08cc118654fb489a2c672412210856b02d25a475acc47023b724dee08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724956633453755653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c
492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724956633400000614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724956633255170031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724956137320713034,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999481973684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999444863415,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724955987639328499,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724955984165847613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724955973084150044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724955973067497563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13c2b08f-2099-47da-a40f-7838f1f9fe4d name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.078287984Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40b3df64-76b4-4e9e-a4d2-1c0bc3a1d854 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.078720074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40b3df64-76b4-4e9e-a4d2-1c0bc3a1d854 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.080343939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff236162-39f9-45af-9c98-b2a28dd589ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.080954822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956935080930635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff236162-39f9-45af-9c98-b2a28dd589ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.081759855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=817805da-b69f-49bc-8cc1-295478a76ecd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.082178471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=817805da-b69f-49bc-8cc1-295478a76ecd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.082947276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c718c58f65081dc2e5aadcaabc17e9028807a591491ad13742b21bd4d36331b,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724956804754864221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61da642e1b6a90457768c8a2a29f25d6d784c179cce01ab22de265bc05135898,PodSandboxId:a25f91827c1298c776d43d452fbcc51e09c3c6e8437d813e27cfb8fcf0074ce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956677815596945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724956676755763508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724956673762499415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99640f096bde6522770020b2249c41e85682111e94d36b9cd851593863a8ef29,PodSandboxId:60e047cd78823a50ad609bf0e147de8e256b54ee232893b23e18a36b5601fe9e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724956645211094274,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b999853b85936b403e953d43f9f09979,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65,PodSandboxId:1b441512c4428695d913c9f4a6d0e4801ed79c1cf1f2d727c91fc539ed988656,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724956644147256399,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a,PodSandboxId:312c52b155c2894017c4d59c923caa4eec4f963377bd5795dcddaacfe652acb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956644036103274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6,PodSandboxId:98c171db62011a31f1e5b96fdd7ac555f5fc1756d6f17abf00c7bdb02bbf77a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724956643955850912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab,PodSandboxId:91f0d779161a7bd935d92644834529cce86bfbdcf46737c08dd1c7d3bfa4016e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956640906958569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a,PodSandboxId:b3d37d9275f30d0fd89144cac3db73cc309f7cc735e96b7e24d80d8f8889f814,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724956634268028784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724956633504160407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41eb
ca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6,PodSandboxId:15ef2a08cc118654fb489a2c672412210856b02d25a475acc47023b724dee08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724956633453755653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c
492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724956633400000614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724956633255170031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724956137320713034,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999481973684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999444863415,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724955987639328499,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724955984165847613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724955973084150044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724955973067497563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=817805da-b69f-49bc-8cc1-295478a76ecd name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.133938227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06dd7fdf-abd6-4f48-bfb2-88b9815246c5 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.134051617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06dd7fdf-abd6-4f48-bfb2-88b9815246c5 name=/runtime.v1.RuntimeService/Version
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.135229600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1b525a1-146a-435a-b4dc-a9c07eedcfe9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.135710304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956935135681520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1b525a1-146a-435a-b4dc-a9c07eedcfe9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.136486565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dd6a195-2671-441a-8197-fe197aa69b12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.136557966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dd6a195-2671-441a-8197-fe197aa69b12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 18:42:15 ha-782425 crio[3720]: time="2024-08-29 18:42:15.137017751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7c718c58f65081dc2e5aadcaabc17e9028807a591491ad13742b21bd4d36331b,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724956804754864221,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41ebca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61da642e1b6a90457768c8a2a29f25d6d784c179cce01ab22de265bc05135898,PodSandboxId:a25f91827c1298c776d43d452fbcc51e09c3c6e8437d813e27cfb8fcf0074ce4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724956677815596945,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724956676755763508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724956673762499415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99640f096bde6522770020b2249c41e85682111e94d36b9cd851593863a8ef29,PodSandboxId:60e047cd78823a50ad609bf0e147de8e256b54ee232893b23e18a36b5601fe9e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724956645211094274,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b999853b85936b403e953d43f9f09979,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65,PodSandboxId:1b441512c4428695d913c9f4a6d0e4801ed79c1cf1f2d727c91fc539ed988656,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724956644147256399,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a,PodSandboxId:312c52b155c2894017c4d59c923caa4eec4f963377bd5795dcddaacfe652acb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956644036103274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6,PodSandboxId:98c171db62011a31f1e5b96fdd7ac555f5fc1756d6f17abf00c7bdb02bbf77a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724956643955850912,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab,PodSandboxId:91f0d779161a7bd935d92644834529cce86bfbdcf46737c08dd1c7d3bfa4016e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724956640906958569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a,PodSandboxId:b3d37d9275f30d0fd89144cac3db73cc309f7cc735e96b7e24d80d8f8889f814,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724956634268028784,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31aff49f13b2ff05527a7ee6f76f09d89d7783ddf729d5954c6a2f37f5fcbdd7,PodSandboxId:e87602d948838dec6486139be09ec85598676d7b714cb7b0f4315fcb42a24b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724956633504160407,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41eb
ca1-035e-44b0-96a2-3aa1e794bc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6,PodSandboxId:15ef2a08cc118654fb489a2c672412210856b02d25a475acc47023b724dee08f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724956633453755653,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c
492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a,PodSandboxId:26532fd1cb3c090838188d7d42c9ee13c9003f247abd16bc846a28051f28a729,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724956633400000614,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c0cf445bd78f47e6e7fbbeb486ff4de,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8,PodSandboxId:d49e72306e840ef1f1d333ca05f33ea92af64626e669b37fde568d1b027b3af7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724956633255170031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfcf75b2b14a72ac0b886c83206e03cf,},Anno
tations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37662e4a563b6a6f22cea44be68d0a0d4606350ce482e76df7d44688014f7fd7,PodSandboxId:3fd1be2d5c6052ca4e58700bbb2523f0d9d3c686ab4b7cbf37aa88f3a9bd4fbe,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724956137320713034,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vwgrt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e10fff1-6582-4f04-a07b-bd664457f72d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902,PodSandboxId:21f825f2fab4d562c86b395e7f107344508edede0f2aaa95f1cfadaf92c57458,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999481973684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qhxm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 286ec4e7-9401-4bdd-b8b2-86f00f130fc2,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c,PodSandboxId:a3d59948e98ac13b33db9dbe99fac74c3fbe8c2e42bb95e967516965d8891974,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724955999444863415,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-nw2x2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab54ce43-4bd7-43ff-aad9-5cac2beb035b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c,PodSandboxId:a4dea5e1c4a599a3f737ad79f9cee1976d37b9961e37616473e93d2f8a064a33,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724955987639328499,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-7l5kn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a9ac71b-acaf-4ac9-b330-943525137d23,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d,PodSandboxId:b589b425f1e05f85147d8e24ce5ae83edb94186cf4ccd7b27adbf4bf136807f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724955984165847613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d5kbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9033b7fd-0da5-4558-8c52-0ba06a7a4704,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240,PodSandboxId:6bd7384dc0e18d9e891053a5546d14b9036c6c969c622209112e9319df3d4733,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724955973084150044,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf4f911b63406e25f415895b8739c1,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7,PodSandboxId:8f3aec69eb919f01b2a3ff5b20c4c9e8d640bf26906a94fdad10648e95bc721b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724955973067497563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-782425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 551caf35234a7eb1c2260c492e064b1e,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dd6a195-2671-441a-8197-fe197aa69b12 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7c718c58f6508       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   e87602d948838       storage-provisioner
	61da642e1b6a9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   a25f91827c129       busybox-7dff88458-vwgrt
	09d16af179676       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   26532fd1cb3c0       kube-apiserver-ha-782425
	e402ee13d7250       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   d49e72306e840       kube-controller-manager-ha-782425
	99640f096bde6       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   60e047cd78823       kube-vip-ha-782425
	078060aad9431       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   1b441512c4428       kindnet-7l5kn
	858c007f01133       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   312c52b155c28       coredns-6f6b679f8f-qhxm5
	32483de13691d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   98c171db62011       kube-proxy-d5kbx
	d90d03cc16636       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   91f0d779161a7       coredns-6f6b679f8f-nw2x2
	55f4da722ec6c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   b3d37d9275f30       etcd-ha-782425
	31aff49f13b2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   e87602d948838       storage-provisioner
	3f5fd38c54d3b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   15ef2a08cc118       kube-scheduler-ha-782425
	8f72905aa04f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   26532fd1cb3c0       kube-apiserver-ha-782425
	edd6df1c8c5b3       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   d49e72306e840       kube-controller-manager-ha-782425
	37662e4a563b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   3fd1be2d5c605       busybox-7dff88458-vwgrt
	409d0bb5b6b40       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   21f825f2fab4d       coredns-6f6b679f8f-qhxm5
	4bd32029a6efc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   a3d59948e98ac       coredns-6f6b679f8f-nw2x2
	23aa351e7d2aa       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    15 minutes ago      Exited              kindnet-cni               0                   a4dea5e1c4a59       kindnet-7l5kn
	2b337a7249ae2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      15 minutes ago      Exited              kube-proxy                0                   b589b425f1e05       kube-proxy-d5kbx
	5077da1dd8cc1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   6bd7384dc0e18       etcd-ha-782425
	a97655078532a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   8f3aec69eb919       kube-scheduler-ha-782425
	
	
	==> coredns [409d0bb5b6b40cd069dafaa4568ef939f18a1c79bdc85fdde26c4287d93ed902] <==
	[INFO] 10.244.1.2:58950 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398175s
	[INFO] 10.244.1.2:44242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081199s
	[INFO] 10.244.1.2:34411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000240374s
	[INFO] 10.244.0.4:53126 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090758s
	[INFO] 10.244.0.4:52901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119888s
	[INFO] 10.244.0.4:37257 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017821s
	[INFO] 10.244.0.4:52278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240335s
	[INFO] 10.244.2.2:51997 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116371s
	[INFO] 10.244.2.2:50462 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000182689s
	[INFO] 10.244.1.2:35790 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065854s
	[INFO] 10.244.0.4:56280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165741s
	[INFO] 10.244.2.2:45436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113865s
	[INFO] 10.244.2.2:34308 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000419163s
	[INFO] 10.244.2.2:49859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112498s
	[INFO] 10.244.1.2:38106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212429s
	[INFO] 10.244.1.2:54743 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163094s
	[INFO] 10.244.1.2:54398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014924s
	[INFO] 10.244.1.2:38833 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103377s
	[INFO] 10.244.0.4:55589 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206346s
	[INFO] 10.244.0.4:55224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098455s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4bd32029a6efc0d3867397fb2b9cfec3c36391b473c6d4fb2708e08dac9bc15c] <==
	[INFO] 10.244.2.2:37045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125236s
	[INFO] 10.244.2.2:51775 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196255s
	[INFO] 10.244.2.2:37371 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123702s
	[INFO] 10.244.2.2:59027 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137207s
	[INFO] 10.244.1.2:42349 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121881s
	[INFO] 10.244.1.2:55845 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147799s
	[INFO] 10.244.1.2:50054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077465s
	[INFO] 10.244.0.4:37394 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001939796s
	[INFO] 10.244.0.4:39167 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001349918s
	[INFO] 10.244.0.4:55247 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192001s
	[INFO] 10.244.0.4:50279 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056293s
	[INFO] 10.244.2.2:57566 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010586s
	[INFO] 10.244.2.2:59408 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079146s
	[INFO] 10.244.1.2:58697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125072s
	[INFO] 10.244.1.2:39849 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011783s
	[INFO] 10.244.1.2:34464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086829s
	[INFO] 10.244.0.4:40575 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123993s
	[INFO] 10.244.0.4:53854 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077061s
	[INFO] 10.244.0.4:35333 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069139s
	[INFO] 10.244.2.2:47493 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133201s
	[INFO] 10.244.0.4:46944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105838s
	[INFO] 10.244.0.4:56535 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148137s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1817&timeout=8m30s&timeoutSeconds=510&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [858c007f01133cbf6f7e4611b5e18e7f05cbad7f18965bf933daf93cf588cb5a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51086->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1309394999]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 18:37:24.235) (total time: 12085ms):
	Trace[1309394999]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51086->10.96.0.1:443: read: connection reset by peer 12084ms (18:37:36.320)
	Trace[1309394999]: [12.085537139s] [12.085537139s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:51086->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d90d03cc166362aceb73a093f198bb289fefbf7e462ddde7900a5a80ebce98ab] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47046->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1152282068]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 18:37:25.417) (total time: 10904ms):
	Trace[1152282068]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47046->10.96.0.1:443: read: connection reset by peer 10904ms (18:37:36.321)
	Trace[1152282068]: [10.904550935s] [10.904550935s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:47046->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-782425
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_26_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:42:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:38:12 +0000   Thu, 29 Aug 2024 18:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-782425
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 44ba55866afc4f4897f7d5cbfc46f2df
	  System UUID:                44ba5586-6afc-4f48-97f7-d5cbfc46f2df
	  Boot ID:                    e2df80f3-fc71-40f7-9f6a-86fc01e04fd1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwgrt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-nw2x2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-qhxm5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-782425                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7l5kn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-782425             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-782425    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-d5kbx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-782425             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-782425                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m8s                  kube-proxy       
	  Normal   Starting                 15m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                   kubelet          Node ha-782425 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                   kubelet          Node ha-782425 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                   kubelet          Node ha-782425 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                   node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   NodeReady                15m                   kubelet          Node ha-782425 status is now: NodeReady
	  Normal   RegisteredNode           14m                   node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   RegisteredNode           13m                   node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Warning  ContainerGCFailed        5m54s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m17s (x3 over 6m6s)  kubelet          Node ha-782425 status is now: NodeNotReady
	  Normal   RegisteredNode           4m17s                 node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   RegisteredNode           4m13s                 node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	  Normal   RegisteredNode           3m16s                 node-controller  Node ha-782425 event: Registered Node ha-782425 in Controller
	
	
	Name:               ha-782425-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_27_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:27:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:42:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:38:45 +0000   Thu, 29 Aug 2024 18:38:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-782425-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a438bc2a769444e18345ad0f28ed5c33
	  System UUID:                a438bc2a-7694-44e1-8345-ad0f28ed5c33
	  Boot ID:                    f25faddc-b228-438e-8537-3bf15302de5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rsqqv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-782425-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-kw2zk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-782425-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-782425-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5k8xr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-782425-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-782425-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-782425-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-782425-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-782425-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-782425-m02 status is now: NodeNotReady
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node ha-782425-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node ha-782425-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-782425-m02 event: Registered Node ha-782425-m02 in Controller
	
	
	Name:               ha-782425-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-782425-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=ha-782425
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T18_29_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:29:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-782425-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:39:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 18:39:29 +0000   Thu, 29 Aug 2024 18:40:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    ha-782425-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1d73c2cadaf4d3cb7d9a4d8e585f4dc
	  System UUID:                a1d73c2c-adaf-4d3c-b7d9-a4d8e585f4dc
	  Boot ID:                    e6b4e447-7857-4236-85e1-a47f00bda6d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6cmpc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kindnet-lbjt6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-5xgbn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-782425-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-782425-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-782425-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-782425-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   RegisteredNode           4m13s                  node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   NodeNotReady             3m37s                  node-controller  Node ha-782425-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-782425-m04 event: Registered Node ha-782425-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m46s (x3 over 2m46s)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x3 over 2m46s)  kubelet          Node ha-782425-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x3 over 2m46s)  kubelet          Node ha-782425-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m46s (x2 over 2m46s)  kubelet          Node ha-782425-m04 has been rebooted, boot id: e6b4e447-7857-4236-85e1-a47f00bda6d5
	  Normal   NodeReady                2m46s (x2 over 2m46s)  kubelet          Node ha-782425-m04 status is now: NodeReady
	  Normal   NodeNotReady             103s                   node-controller  Node ha-782425-m04 status is now: NodeNotReady
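
	The node description above ends with ha-782425-m04 reporting every condition as Unknown ("Kubelet stopped posting node status") and carrying the node.kubernetes.io/unreachable NoExecute/NoSchedule taints, which the node controller applies once kubelet heartbeats stop. A minimal sketch for reproducing this view from the test host, assuming the kubeconfig context is named after the profile (ha-782425), as with the other contexts in this report:
	
	  # Sketch (assumed context name): list node state, then the Ready condition only
	  kubectl --context ha-782425 get nodes -o wide
	  kubectl --context ha-782425 get node ha-782425-m04 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'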
	
	
	==> dmesg <==
	[  +0.056184] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054002] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.164673] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.149154] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.266975] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +3.780708] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.381995] systemd-fstab-generator[891]: Ignoring "noauto" option for root device
	[  +0.060319] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.240176] kauditd_printk_skb: 74 callbacks suppressed
	[  +3.218514] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +2.447866] kauditd_printk_skb: 26 callbacks suppressed
	[ +15.454195] kauditd_printk_skb: 38 callbacks suppressed
	[Aug29 18:27] kauditd_printk_skb: 24 callbacks suppressed
	[Aug29 18:34] kauditd_printk_skb: 1 callbacks suppressed
	[Aug29 18:37] systemd-fstab-generator[3643]: Ignoring "noauto" option for root device
	[  +0.157201] systemd-fstab-generator[3655]: Ignoring "noauto" option for root device
	[  +0.171951] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.141238] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.279432] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +4.392282] systemd-fstab-generator[3808]: Ignoring "noauto" option for root device
	[  +0.089317] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.574001] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.706987] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.418678] kauditd_printk_skb: 30 callbacks suppressed
	[Aug29 18:38] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [5077da1dd8cc1e3f783eb2c02cd3c99009691d54e129a36144cf71f564f50240] <==
	2024/08/29 18:35:35 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-29T18:35:36.135918Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":13514618170561217261,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-29T18:35:36.248254Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.39:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:35:36.248443Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.39:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T18:35:36.248587Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"38979a8318efbb8d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-29T18:35:36.248845Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.248897Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.248921Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.248956Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249015Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249071Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249083Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"61b653af6f1344a5"}
	{"level":"info","ts":"2024-08-29T18:35:36.249089Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249097Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249129Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249164Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249210Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249237Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.249260Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:35:36.252063Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"warn","ts":"2024-08-29T18:35:36.252064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.120043688s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-29T18:35:36.252203Z","caller":"traceutil/trace.go:171","msg":"trace[1221347184] range","detail":"{range_begin:; range_end:; }","duration":"9.120196754s","start":"2024-08-29T18:35:27.131997Z","end":"2024-08-29T18:35:36.252194Z","steps":["trace[1221347184] 'agreement among raft nodes before linearized reading'  (duration: 9.120041599s)"],"step_count":1}
	{"level":"error","ts":"2024-08-29T18:35:36.252247Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-29T18:35:36.252359Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2024-08-29T18:35:36.252388Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-782425","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.39:2380"],"advertise-client-urls":["https://192.168.39.39:2379"]}
	
	
	==> etcd [55f4da722ec6c4acff782a3123b188313d8f86294083e6a37da8d9c6b02b7d4a] <==
	{"level":"info","ts":"2024-08-29T18:38:53.604436Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.609055Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.617923Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38979a8318efbb8d","to":"1ede913032f684f1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-29T18:38:53.617972Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:38:53.622877Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38979a8318efbb8d","to":"1ede913032f684f1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-29T18:38:53.623048Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:04.962210Z","caller":"traceutil/trace.go:171","msg":"trace[357842179] transaction","detail":"{read_only:false; response_revision:2419; number_of_response:1; }","duration":"108.88505ms","start":"2024-08-29T18:39:04.853296Z","end":"2024-08-29T18:39:04.962181Z","steps":["trace[357842179] 'process raft request'  (duration: 108.753483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:39:42.155529Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.220:51630","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-29T18:39:42.177290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38979a8318efbb8d switched to configuration voters=(4077897875457031053 7040907080388265125)"}
	{"level":"info","ts":"2024-08-29T18:39:42.179501Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"9d46469dd2e6eab1","local-member-id":"38979a8318efbb8d","removed-remote-peer-id":"1ede913032f684f1","removed-remote-peer-urls":["https://192.168.39.220:2380"]}
	{"level":"info","ts":"2024-08-29T18:39:42.179613Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1ede913032f684f1"}
	{"level":"warn","ts":"2024-08-29T18:39:42.179946Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:42.180009Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1ede913032f684f1"}
	{"level":"warn","ts":"2024-08-29T18:39:42.180256Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:42.180307Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:42.180518Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"warn","ts":"2024-08-29T18:39:42.180754Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1","error":"context canceled"}
	{"level":"warn","ts":"2024-08-29T18:39:42.181026Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1ede913032f684f1","error":"failed to read 1ede913032f684f1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-29T18:39:42.181106Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"warn","ts":"2024-08-29T18:39:42.181263Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1","error":"context canceled"}
	{"level":"info","ts":"2024-08-29T18:39:42.181332Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38979a8318efbb8d","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:42.181389Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1ede913032f684f1"}
	{"level":"info","ts":"2024-08-29T18:39:42.181427Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"38979a8318efbb8d","removed-remote-peer-id":"1ede913032f684f1"}
	{"level":"warn","ts":"2024-08-29T18:39:42.201878Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"38979a8318efbb8d","remote-peer-id-stream-handler":"38979a8318efbb8d","remote-peer-id-from":"1ede913032f684f1"}
	{"level":"warn","ts":"2024-08-29T18:39:42.203082Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.220:46874","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:42:15 up 16 min,  0 users,  load average: 0.50, 0.51, 0.40
	Linux ha-782425 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [078060aad94315c04fe29c791ec93d09a6348a7fe30a8bc10a303ac96c6b4b65] <==
	I0829 18:41:35.199896       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:41:45.197386       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:41:45.197425       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:41:45.197638       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:41:45.197677       1 main.go:299] handling current node
	I0829 18:41:45.197690       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:41:45.197696       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:41:55.200930       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:41:55.201088       1 main.go:299] handling current node
	I0829 18:41:55.201135       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:41:55.201155       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:41:55.201347       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:41:55.201375       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:42:05.200904       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:42:05.200944       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:42:05.201103       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:42:05.201126       1 main.go:299] handling current node
	I0829 18:42:05.201158       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:42:05.201164       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:42:15.193901       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:42:15.193959       1 main.go:299] handling current node
	I0829 18:42:15.193993       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:42:15.193998       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:42:15.194163       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:42:15.194182       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [23aa351e7d2aa6047fb29ec7418eaf8cc8e3e8b5d952bd46f2f29a057851f06c] <==
	I0829 18:34:58.595206       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:08.603255       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:35:08.603312       1 main.go:299] handling current node
	I0829 18:35:08.603330       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:35:08.603338       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:35:08.603500       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:35:08.603517       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:08.603576       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:35:08.603591       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:35:18.597896       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:35:18.598049       1 main.go:299] handling current node
	I0829 18:35:18.598079       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:35:18.598097       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:35:18.598272       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:35:18.598295       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:18.598381       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:35:18.598400       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	I0829 18:35:28.594737       1 main.go:295] Handling node with IPs: map[192.168.39.39:{}]
	I0829 18:35:28.594835       1 main.go:299] handling current node
	I0829 18:35:28.594867       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0829 18:35:28.594875       1 main.go:322] Node ha-782425-m02 has CIDR [10.244.1.0/24] 
	I0829 18:35:28.595046       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0829 18:35:28.595066       1 main.go:322] Node ha-782425-m03 has CIDR [10.244.2.0/24] 
	I0829 18:35:28.595127       1 main.go:295] Handling node with IPs: map[192.168.39.235:{}]
	I0829 18:35:28.595144       1 main.go:322] Node ha-782425-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [09d16af179676dd4557955b012625899572775e6f6ac44735c76aeddd44d3fdf] <==
	I0829 18:37:58.738228       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0829 18:37:58.795322       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 18:37:58.808819       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 18:37:58.808851       1 policy_source.go:224] refreshing policies
	I0829 18:37:58.831723       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 18:37:58.831757       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 18:37:58.831816       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 18:37:58.831923       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 18:37:58.832189       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 18:37:58.832602       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 18:37:58.833560       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 18:37:58.837042       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 18:37:58.839205       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 18:37:58.839303       1 aggregator.go:171] initial CRD sync complete...
	I0829 18:37:58.839338       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 18:37:58.839361       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 18:37:58.839383       1 cache.go:39] Caches are synced for autoregister controller
	W0829 18:37:58.843242       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.253]
	I0829 18:37:58.844720       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 18:37:58.851397       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0829 18:37:58.854296       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0829 18:37:58.892752       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 18:37:59.738359       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0829 18:38:00.068401       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.253 192.168.39.39]
	W0829 18:38:10.213619       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.253 192.168.39.39]
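
	The "Resetting endpoints for master service \"kubernetes\"" lines show the kubernetes Service backends shrinking from three control-plane IPs at 18:38:00 to [192.168.39.253 192.168.39.39] at 18:38:10, i.e. 192.168.39.220 (m03) dropping out. A sketch for inspecting that endpoint set directly, assuming the ha-782425 context:
	
	  # Sketch (assumed context name): show which apiservers back the kubernetes Service
	  kubectl --context ha-782425 -n default get endpoints kubernetes -o yaml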
	
	
	==> kube-apiserver [8f72905aa04f170f74e67c3788c00d12a114862e321da6fe526e6d46be461c5a] <==
	I0829 18:37:13.693911       1 options.go:228] external host was not specified, using 192.168.39.39
	I0829 18:37:13.695722       1 server.go:142] Version: v1.31.0
	I0829 18:37:13.695812       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0829 18:37:14.150493       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:37:14.151215       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0829 18:37:14.152737       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0829 18:37:14.164160       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0829 18:37:14.164200       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0829 18:37:14.164454       1 instance.go:232] Using reconciler: lease
	I0829 18:37:14.164866       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0829 18:37:14.165669       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:37:34.150047       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0829 18:37:34.151199       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0829 18:37:34.165972       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [e402ee13d72501a06994d139fc6a83416288cf463f25d5e0754da33b8dc4858c] <==
	I0829 18:39:40.219178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="85.912µs"
	I0829 18:39:41.027420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.616µs"
	I0829 18:39:41.209285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.623µs"
	I0829 18:39:41.217949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.617µs"
	I0829 18:39:44.208291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.560148ms"
	I0829 18:39:44.208470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.773µs"
	I0829 18:39:53.095025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m03"
	I0829 18:39:53.095198       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-782425-m04"
	E0829 18:39:53.143292       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-782425-m03\", UID:\"1e1b2f9f-16e6-4c09-a092-cfe753cee8af\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-782425-m03\", UID:\"f41a75a6-55fd-4e13-a01f-88b08e730f67\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-782425-m03\" not found" logger="UnhandledError"
	E0829 18:40:02.380240       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:02.380369       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:02.380396       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:02.380420       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:02.380443       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:22.381527       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:22.381616       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:22.381632       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:22.381641       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	E0829 18:40:22.381647       1 gc_controller.go:151] "Failed to get node" err="node \"ha-782425-m03\" not found" logger="pod-garbage-collector-controller" node="ha-782425-m03"
	I0829 18:40:32.464272       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:40:32.492161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:40:32.555035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.442785ms"
	I0829 18:40:32.555258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.01µs"
	I0829 18:40:33.469422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	I0829 18:40:37.598924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-782425-m04"
	
	
	==> kube-controller-manager [edd6df1c8c5b3e8bf24de8ef655497a92c5bff062a43d01179a4d86f7a2347c8] <==
	I0829 18:37:14.441942       1 serving.go:386] Generated self-signed cert in-memory
	I0829 18:37:14.777199       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0829 18:37:14.777300       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:37:14.779105       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0829 18:37:14.779924       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0829 18:37:14.780124       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 18:37:14.780243       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0829 18:37:35.171019       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.39:8443/healthz\": dial tcp 192.168.39.39:8443: connect: connection refused"
	
	
	==> kube-proxy [2b337a7249ae2ffd41055addb2ffd5d607c5cf1a816fdad3dab6c7ed2a7a716d] <==
	E0829 18:34:26.948722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:30.016332       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:30.016449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:30.016595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:30.016633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:30.016718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:30.016753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:36.161666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:36.161765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:36.161956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:36.162033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:36.162165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:36.162208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:45.376924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:45.377566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:48.448504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:48.449287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:34:48.449767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:34:48.449894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:35:06.881752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:35:06.882004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:35:06.882245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:35:06.882339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1764\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0829 18:35:09.953138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835": dial tcp 192.168.39.254:8443: connect: no route to host
	E0829 18:35:09.953197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-782425&resourceVersion=1835\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [32483de13691d4ca5512c75cbdc46a6350bdaee89c5ce7d051aed08f2629a7d6] <==
	E0829 18:37:25.121710       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:28.193043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:31.265734       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:37.408968       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0829 18:37:49.697739       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-782425\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0829 18:38:07.368559       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.39"]
	E0829 18:38:07.368941       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:38:07.401141       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:38:07.401239       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:38:07.401294       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:38:07.403480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:38:07.403940       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:38:07.404177       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:38:07.406344       1 config.go:197] "Starting service config controller"
	I0829 18:38:07.406436       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:38:07.406481       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:38:07.406508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:38:07.408282       1 config.go:326] "Starting node config controller"
	I0829 18:38:07.408523       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:38:07.508871       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:38:07.508961       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:38:07.515116       1 shared_informer.go:320] Caches are synced for node config
	W0829 18:40:40.727160       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0829 18:40:40.727295       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0829 18:40:40.727352       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
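
	Both kube-proxy logs show long runs of "no route to host" while dialing control-plane.minikube.internal:8443 (192.168.39.254); given the kube-vip pod listed for m02 above, that address is most likely the cluster's virtual IP, and node info is only retrieved at 18:38:07 once an apiserver becomes reachable again. A sketch for probing that endpoint, assuming the profile name ha-782425 and that curl is available in the minikube guest:
	
	  # Sketch (assumed profile/context name): probe the control-plane VIP and apiserver health
	  minikube ssh -p ha-782425 -- curl -sk https://192.168.39.254:8443/healthz
	  kubectl --context ha-782425 get --raw '/readyz?verbose'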
	
	
	==> kube-scheduler [3f5fd38c54d3b71920e140657150875973b66dcab9493bd836fc64ab5cb4ebd6] <==
	W0829 18:37:52.007919       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.39:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:52.008081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.39:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:52.254679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.39:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:52.254753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.39:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:52.291510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:52.291580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:53.051628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.39:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:53.051743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.39:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:53.074744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.39:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:53.074940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.39:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:53.947460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:53.947634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:54.229020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.39:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:54.229249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.39:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:55.222984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.39:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:55.223284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.39:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:55.290880       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.39:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:55.291014       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.39:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	W0829 18:37:56.492479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.39:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.39:8443: connect: connection refused
	E0829 18:37:56.492543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.39:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.39:8443: connect: connection refused" logger="UnhandledError"
	I0829 18:38:11.185147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 18:39:40.186942       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6cmpc\": pod busybox-7dff88458-6cmpc is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-6cmpc" node="ha-782425-m04"
	E0829 18:39:40.187438       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e886b305-8c25-4b36-a81b-3d3b637bbeea(default/busybox-7dff88458-6cmpc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-6cmpc"
	E0829 18:39:40.187553       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-6cmpc\": pod busybox-7dff88458-6cmpc is already assigned to node \"ha-782425-m04\"" pod="default/busybox-7dff88458-6cmpc"
	I0829 18:39:40.187764       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-6cmpc" node="ha-782425-m04"
	
	
	==> kube-scheduler [a97655078532a37498ae3c1159eb1b7f11f52ce7fec795aee509b7f2c7bd46c7] <==
	E0829 18:28:53.677627       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-h8k94" node="ha-782425-m03"
	E0829 18:28:53.677952       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-h8k94\": pod busybox-7dff88458-h8k94 is already assigned to node \"ha-782425-m03\"" pod="default/busybox-7dff88458-h8k94"
	E0829 18:28:53.695276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:28:53.695376       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0e10fff1-6582-4f04-a07b-bd664457f72d(default/busybox-7dff88458-vwgrt) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-vwgrt"
	E0829 18:28:53.695398       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vwgrt\": pod busybox-7dff88458-vwgrt is already assigned to node \"ha-782425\"" pod="default/busybox-7dff88458-vwgrt"
	I0829 18:28:53.695418       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-vwgrt" node="ha-782425"
	E0829 18:29:31.044983       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045106       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ee67d98e-b169-415c-ac85-e253e2888144(kube-system/kindnet-lbjt6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lbjt6"
	E0829 18:29:31.045132       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lbjt6\": pod kindnet-lbjt6 is already assigned to node \"ha-782425-m04\"" pod="kube-system/kindnet-lbjt6"
	I0829 18:29:31.045177       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lbjt6" node="ha-782425-m04"
	E0829 18:29:31.045921       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	E0829 18:29:31.045987       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 278c58ce-3b1f-45c5-a1c9-0d2ce710f092(kube-system/kube-proxy-5xgbn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5xgbn"
	E0829 18:29:31.046008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5xgbn\": pod kube-proxy-5xgbn is already assigned to node \"ha-782425-m04\"" pod="kube-system/kube-proxy-5xgbn"
	I0829 18:29:31.046027       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5xgbn" node="ha-782425-m04"
	E0829 18:35:27.567323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0829 18:35:28.202485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0829 18:35:30.066852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0829 18:35:32.286517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0829 18:35:32.775523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0829 18:35:33.121047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0829 18:35:34.530054       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0829 18:35:34.982846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0829 18:35:35.077668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0829 18:35:35.901023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0829 18:35:35.939664       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 29 18:40:41 ha-782425 kubelet[1321]: E0829 18:40:41.984618    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956841983098378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:40:41 ha-782425 kubelet[1321]: E0829 18:40:41.984712    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956841983098378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:40:51 ha-782425 kubelet[1321]: E0829 18:40:51.990336    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956851986550411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:40:51 ha-782425 kubelet[1321]: E0829 18:40:51.990367    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956851986550411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:01 ha-782425 kubelet[1321]: E0829 18:41:01.993691    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956861992641593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:01 ha-782425 kubelet[1321]: E0829 18:41:01.994339    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956861992641593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:11 ha-782425 kubelet[1321]: E0829 18:41:11.996706    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956871996403787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:11 ha-782425 kubelet[1321]: E0829 18:41:11.996756    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956871996403787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:21 ha-782425 kubelet[1321]: E0829 18:41:21.760997    1321 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 18:41:21 ha-782425 kubelet[1321]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:41:21 ha-782425 kubelet[1321]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:41:21 ha-782425 kubelet[1321]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:41:21 ha-782425 kubelet[1321]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:41:21 ha-782425 kubelet[1321]: E0829 18:41:21.998517    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956881998060492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:21 ha-782425 kubelet[1321]: E0829 18:41:21.998547    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956881998060492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:32 ha-782425 kubelet[1321]: E0829 18:41:32.000635    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956892000209926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:32 ha-782425 kubelet[1321]: E0829 18:41:32.000671    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956892000209926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:42 ha-782425 kubelet[1321]: E0829 18:41:42.002992    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956902002248400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:42 ha-782425 kubelet[1321]: E0829 18:41:42.003034    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956902002248400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:52 ha-782425 kubelet[1321]: E0829 18:41:52.004820    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956912004343927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:41:52 ha-782425 kubelet[1321]: E0829 18:41:52.005307    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956912004343927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:42:02 ha-782425 kubelet[1321]: E0829 18:42:02.007232    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956922006847172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:42:02 ha-782425 kubelet[1321]: E0829 18:42:02.007527    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956922006847172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:42:12 ha-782425 kubelet[1321]: E0829 18:42:12.009406    1321 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956932008925108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 18:42:12 ha-782425 kubelet[1321]: E0829 18:42:12.009450    1321 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724956932008925108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:42:14.687384   40488 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-782425 -n ha-782425
helpers_test.go:261: (dbg) Run:  kubectl --context ha-782425 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.69s)
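The kubelet log above repeatedly fails to create its KUBE-KUBELET-CANARY chain because the guest's ip6tables nat table cannot be initialized ("Table does not exist (do you need to insmod?)"). A minimal diagnostic sketch, assuming the ha-782425 profile is still reachable and that the legacy ip6table_nat kernel module is merely not loaded (neither assumption is confirmed by this report):

	# Load the ip6tables nat module inside the guest, then list the nat table to verify it now exists.
	out/minikube-linux-amd64 ssh -p ha-782425 "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"

If the guest kernel does not ship that module at all, the modprobe fails and the canary errors will keep recurring; the eviction-manager "missing image stats" messages above are a separate symptom and are not addressed by this check.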

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (330.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-922931
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-922931
E0829 18:58:26.706829   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-922931: exit status 82 (2m1.745890549s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-922931-m03"  ...
	* Stopping node "multinode-922931-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-922931" : exit status 82
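Exit status 82 here corresponds to the GUEST_STOP_TIMEOUT shown in the stderr block: at least one node VM stayed in the "Running" state until the stop gave up. A minimal reproduction-and-log-collection sketch, assuming a local kvm2 profile with the same name and the test binary at out/minikube-linux-amd64 (both names are taken from the log above, not independently verified):

	# Confirm the profile and its nodes, retry the stop, and capture the log file the error box asks for.
	out/minikube-linux-amd64 node list -p multinode-922931
	out/minikube-linux-amd64 stop -p multinode-922931
	out/minikube-linux-amd64 logs -p multinode-922931 --file=logs.txt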
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922931 --wait=true -v=8 --alsologtostderr
E0829 18:59:49.633804   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:52.698812   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922931 --wait=true -v=8 --alsologtostderr: (3m26.191007043s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-922931
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-922931 -n multinode-922931
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-922931 logs -n 25: (1.377727811s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2825152660/001/cp-test_multinode-922931-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931:/home/docker/cp-test_multinode-922931-m02_multinode-922931.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931 sudo cat                                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m02_multinode-922931.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03:/home/docker/cp-test_multinode-922931-m02_multinode-922931-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931-m03 sudo cat                                   | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m02_multinode-922931-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp testdata/cp-test.txt                                                | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2825152660/001/cp-test_multinode-922931-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931:/home/docker/cp-test_multinode-922931-m03_multinode-922931.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931 sudo cat                                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m03_multinode-922931.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02:/home/docker/cp-test_multinode-922931-m03_multinode-922931-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931-m02 sudo cat                                   | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m03_multinode-922931-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-922931 node stop m03                                                          | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	| node    | multinode-922931 node start                                                             | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-922931                                                                | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:57 UTC |                     |
	| stop    | -p multinode-922931                                                                     | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:57 UTC |                     |
	| start   | -p multinode-922931                                                                     | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:59 UTC | 29 Aug 24 19:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-922931                                                                | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 19:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:59:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:59:37.054756   50033 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:59:37.054892   50033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:59:37.054902   50033 out.go:358] Setting ErrFile to fd 2...
	I0829 18:59:37.054909   50033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:59:37.055116   50033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:59:37.055712   50033 out.go:352] Setting JSON to false
	I0829 18:59:37.056598   50033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6124,"bootTime":1724951853,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:59:37.056662   50033 start.go:139] virtualization: kvm guest
	I0829 18:59:37.058937   50033 out.go:177] * [multinode-922931] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:59:37.060091   50033 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:59:37.060092   50033 notify.go:220] Checking for updates...
	I0829 18:59:37.062276   50033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:59:37.063551   50033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:59:37.064784   50033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:59:37.066154   50033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:59:37.067604   50033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:59:37.069353   50033 config.go:182] Loaded profile config "multinode-922931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:59:37.069474   50033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:59:37.069930   50033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:59:37.069978   50033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:59:37.087206   50033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0829 18:59:37.087781   50033 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:59:37.088528   50033 main.go:141] libmachine: Using API Version  1
	I0829 18:59:37.088555   50033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:59:37.088945   50033 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:59:37.089119   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:59:37.125606   50033 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 18:59:37.126913   50033 start.go:297] selected driver: kvm2
	I0829 18:59:37.126926   50033 start.go:901] validating driver "kvm2" against &{Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:59:37.127111   50033 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:59:37.127470   50033 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:59:37.127550   50033 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:59:37.142847   50033 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:59:37.143817   50033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:59:37.143900   50033 cni.go:84] Creating CNI manager for ""
	I0829 18:59:37.143916   50033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 18:59:37.143996   50033 start.go:340] cluster config:
	{Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:59:37.144176   50033 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:59:37.146960   50033 out.go:177] * Starting "multinode-922931" primary control-plane node in "multinode-922931" cluster
	I0829 18:59:37.148341   50033 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:59:37.148379   50033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:59:37.148391   50033 cache.go:56] Caching tarball of preloaded images
	I0829 18:59:37.148456   50033 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:59:37.148470   50033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:59:37.148610   50033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/config.json ...
	I0829 18:59:37.148836   50033 start.go:360] acquireMachinesLock for multinode-922931: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:59:37.148880   50033 start.go:364] duration metric: took 25.938µs to acquireMachinesLock for "multinode-922931"
	I0829 18:59:37.148905   50033 start.go:96] Skipping create...Using existing machine configuration
	I0829 18:59:37.148916   50033 fix.go:54] fixHost starting: 
	I0829 18:59:37.149190   50033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:59:37.149224   50033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:59:37.163346   50033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I0829 18:59:37.163775   50033 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:59:37.164214   50033 main.go:141] libmachine: Using API Version  1
	I0829 18:59:37.164231   50033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:59:37.164621   50033 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:59:37.164818   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:59:37.164978   50033 main.go:141] libmachine: (multinode-922931) Calling .GetState
	I0829 18:59:37.166673   50033 fix.go:112] recreateIfNeeded on multinode-922931: state=Running err=<nil>
	W0829 18:59:37.166695   50033 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 18:59:37.169527   50033 out.go:177] * Updating the running kvm2 "multinode-922931" VM ...
	I0829 18:59:37.170874   50033 machine.go:93] provisionDockerMachine start ...
	I0829 18:59:37.170890   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:59:37.171074   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.173497   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.173949   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.173979   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.174077   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.174247   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.174417   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.174559   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.174741   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.175024   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.175041   50033 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:59:37.286982   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922931
	
	I0829 18:59:37.287015   50033 main.go:141] libmachine: (multinode-922931) Calling .GetMachineName
	I0829 18:59:37.287234   50033 buildroot.go:166] provisioning hostname "multinode-922931"
	I0829 18:59:37.287256   50033 main.go:141] libmachine: (multinode-922931) Calling .GetMachineName
	I0829 18:59:37.287454   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.290166   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.290526   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.290563   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.290658   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.290840   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.290979   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.291087   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.291247   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.291414   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.291430   50033 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-922931 && echo "multinode-922931" | sudo tee /etc/hostname
	I0829 18:59:37.413402   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922931
	
	I0829 18:59:37.413432   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.416286   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.416722   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.416749   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.416941   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.417144   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.417292   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.417396   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.417541   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.417728   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.417746   50033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-922931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-922931/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-922931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:59:37.526697   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:59:37.526729   50033 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:59:37.526768   50033 buildroot.go:174] setting up certificates
	I0829 18:59:37.526784   50033 provision.go:84] configureAuth start
	I0829 18:59:37.526804   50033 main.go:141] libmachine: (multinode-922931) Calling .GetMachineName
	I0829 18:59:37.527079   50033 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 18:59:37.529995   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.530386   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.530412   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.530562   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.532953   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.533328   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.533382   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.533438   50033 provision.go:143] copyHostCerts
	I0829 18:59:37.533481   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:59:37.533514   50033 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:59:37.533531   50033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:59:37.533623   50033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:59:37.533719   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:59:37.533739   50033 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:59:37.533743   50033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:59:37.533768   50033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:59:37.533815   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:59:37.533837   50033 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:59:37.533840   50033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:59:37.533860   50033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:59:37.533908   50033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.multinode-922931 san=[127.0.0.1 192.168.39.171 localhost minikube multinode-922931]
	I0829 18:59:37.682359   50033 provision.go:177] copyRemoteCerts
	I0829 18:59:37.682418   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:59:37.682443   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.685371   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.685742   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.685763   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.685992   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.686152   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.686316   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.686465   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 18:59:37.773066   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:59:37.773156   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:59:37.799212   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:59:37.799304   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0829 18:59:37.826153   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:59:37.826232   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:59:37.848886   50033 provision.go:87] duration metric: took 322.08952ms to configureAuth
	I0829 18:59:37.848912   50033 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:59:37.849146   50033 config.go:182] Loaded profile config "multinode-922931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:59:37.849228   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.852277   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.852669   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.852711   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.852890   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.853091   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.853244   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.853403   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.853556   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.853761   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.853781   50033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:01:08.561984   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:01:08.562012   50033 machine.go:96] duration metric: took 1m31.391127481s to provisionDockerMachine
	I0829 19:01:08.562051   50033 start.go:293] postStartSetup for "multinode-922931" (driver="kvm2")
	I0829 19:01:08.562065   50033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:01:08.562085   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.562641   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:01:08.562676   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.565987   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.566416   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.566439   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.566622   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.566820   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.566983   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.567117   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 19:01:08.653170   50033 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:01:08.657184   50033 command_runner.go:130] > NAME=Buildroot
	I0829 19:01:08.657205   50033 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0829 19:01:08.657210   50033 command_runner.go:130] > ID=buildroot
	I0829 19:01:08.657215   50033 command_runner.go:130] > VERSION_ID=2023.02.9
	I0829 19:01:08.657220   50033 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0829 19:01:08.657250   50033 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:01:08.657261   50033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:01:08.657323   50033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:01:08.657428   50033 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:01:08.657440   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 19:01:08.657553   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:01:08.666679   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:01:08.690150   50033 start.go:296] duration metric: took 128.083581ms for postStartSetup
	I0829 19:01:08.690207   50033 fix.go:56] duration metric: took 1m31.541290233s for fixHost
	I0829 19:01:08.690231   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.693191   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.693553   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.693611   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.693732   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.693911   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.694037   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.694271   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.694453   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 19:01:08.694624   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 19:01:08.694637   50033 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:01:08.802640   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724958068.775996167
	
	I0829 19:01:08.802674   50033 fix.go:216] guest clock: 1724958068.775996167
	I0829 19:01:08.802687   50033 fix.go:229] Guest: 2024-08-29 19:01:08.775996167 +0000 UTC Remote: 2024-08-29 19:01:08.690213116 +0000 UTC m=+91.672633372 (delta=85.783051ms)
	I0829 19:01:08.802725   50033 fix.go:200] guest clock delta is within tolerance: 85.783051ms
	I0829 19:01:08.802735   50033 start.go:83] releasing machines lock for "multinode-922931", held for 1m31.65384268s
	I0829 19:01:08.802773   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.803067   50033 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 19:01:08.806035   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.806445   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.806465   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.806607   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.807113   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.807314   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.807398   50033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:01:08.807441   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.807553   50033 ssh_runner.go:195] Run: cat /version.json
	I0829 19:01:08.807582   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.810026   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810359   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810438   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.810473   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810559   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.810716   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.810736   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.810753   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810862   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.810920   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.810990   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 19:01:08.811100   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.811243   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.811449   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 19:01:08.919687   50033 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0829 19:01:08.919748   50033 command_runner.go:130] > {"iso_version": "v1.33.1-1724775098-19521", "kicbase_version": "v0.0.44-1724667927-19511", "minikube_version": "v1.33.1", "commit": "0d49494423856821e9b08161b42ba19c667a6f89"}
	I0829 19:01:08.919873   50033 ssh_runner.go:195] Run: systemctl --version
	I0829 19:01:08.926000   50033 command_runner.go:130] > systemd 252 (252)
	I0829 19:01:08.926030   50033 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0829 19:01:08.926375   50033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:01:09.082295   50033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 19:01:09.089670   50033 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0829 19:01:09.089733   50033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:01:09.089816   50033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:01:09.098593   50033 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:01:09.098615   50033 start.go:495] detecting cgroup driver to use...
	I0829 19:01:09.098688   50033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:01:09.113832   50033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:01:09.126834   50033 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:01:09.126905   50033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:01:09.139817   50033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:01:09.152680   50033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:01:09.291184   50033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:01:09.450754   50033 docker.go:233] disabling docker service ...
	I0829 19:01:09.450824   50033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:01:09.466198   50033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:01:09.480007   50033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:01:09.613161   50033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:01:09.750633   50033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:01:09.764220   50033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:01:09.781185   50033 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0829 19:01:09.781477   50033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:01:09.781533   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.792622   50033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:01:09.792699   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.805343   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.817143   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.827985   50033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:01:09.837753   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.847432   50033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.857381   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.867062   50033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:01:09.875783   50033 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0829 19:01:09.875883   50033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:01:09.884458   50033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:01:10.018103   50033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:01:16.116115   50033 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.097970116s)
	I0829 19:01:16.116142   50033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:01:16.116187   50033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:01:16.120649   50033 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0829 19:01:16.120678   50033 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0829 19:01:16.120688   50033 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0829 19:01:16.120697   50033 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:01:16.120704   50033 command_runner.go:130] > Access: 2024-08-29 19:01:15.991811964 +0000
	I0829 19:01:16.120714   50033 command_runner.go:130] > Modify: 2024-08-29 19:01:15.991811964 +0000
	I0829 19:01:16.120725   50033 command_runner.go:130] > Change: 2024-08-29 19:01:15.991811964 +0000
	I0829 19:01:16.120732   50033 command_runner.go:130] >  Birth: -
	I0829 19:01:16.120762   50033 start.go:563] Will wait 60s for crictl version
	I0829 19:01:16.120810   50033 ssh_runner.go:195] Run: which crictl
	I0829 19:01:16.124436   50033 command_runner.go:130] > /usr/bin/crictl
	I0829 19:01:16.124487   50033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:01:16.159113   50033 command_runner.go:130] > Version:  0.1.0
	I0829 19:01:16.159137   50033 command_runner.go:130] > RuntimeName:  cri-o
	I0829 19:01:16.159155   50033 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0829 19:01:16.159163   50033 command_runner.go:130] > RuntimeApiVersion:  v1
	I0829 19:01:16.159245   50033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:01:16.159302   50033 ssh_runner.go:195] Run: crio --version
	I0829 19:01:16.187486   50033 command_runner.go:130] > crio version 1.29.1
	I0829 19:01:16.187508   50033 command_runner.go:130] > Version:        1.29.1
	I0829 19:01:16.187516   50033 command_runner.go:130] > GitCommit:      unknown
	I0829 19:01:16.187521   50033 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:01:16.187526   50033 command_runner.go:130] > GitTreeState:   clean
	I0829 19:01:16.187533   50033 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0829 19:01:16.187540   50033 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:01:16.187546   50033 command_runner.go:130] > Compiler:       gc
	I0829 19:01:16.187552   50033 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:01:16.187558   50033 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:01:16.187564   50033 command_runner.go:130] > BuildTags:      
	I0829 19:01:16.187572   50033 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:01:16.187580   50033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:01:16.187588   50033 command_runner.go:130] >   btrfs_noversion
	I0829 19:01:16.187599   50033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:01:16.187607   50033 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:01:16.187613   50033 command_runner.go:130] >   seccomp
	I0829 19:01:16.187621   50033 command_runner.go:130] > LDFlags:          unknown
	I0829 19:01:16.187627   50033 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:01:16.187634   50033 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:01:16.188693   50033 ssh_runner.go:195] Run: crio --version
	I0829 19:01:16.216213   50033 command_runner.go:130] > crio version 1.29.1
	I0829 19:01:16.216233   50033 command_runner.go:130] > Version:        1.29.1
	I0829 19:01:16.216238   50033 command_runner.go:130] > GitCommit:      unknown
	I0829 19:01:16.216242   50033 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:01:16.216246   50033 command_runner.go:130] > GitTreeState:   clean
	I0829 19:01:16.216251   50033 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0829 19:01:16.216255   50033 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:01:16.216259   50033 command_runner.go:130] > Compiler:       gc
	I0829 19:01:16.216263   50033 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:01:16.216267   50033 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:01:16.216271   50033 command_runner.go:130] > BuildTags:      
	I0829 19:01:16.216276   50033 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:01:16.216280   50033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:01:16.216293   50033 command_runner.go:130] >   btrfs_noversion
	I0829 19:01:16.216300   50033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:01:16.216303   50033 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:01:16.216307   50033 command_runner.go:130] >   seccomp
	I0829 19:01:16.216313   50033 command_runner.go:130] > LDFlags:          unknown
	I0829 19:01:16.216318   50033 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:01:16.216325   50033 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:01:16.219123   50033 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:01:16.220547   50033 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 19:01:16.223443   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:16.223792   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:16.223821   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:16.223987   50033 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:01:16.227881   50033 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0829 19:01:16.227994   50033 kubeadm.go:883] updating cluster {Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:01:16.228123   50033 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:01:16.228179   50033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:01:16.272868   50033 command_runner.go:130] > {
	I0829 19:01:16.272894   50033 command_runner.go:130] >   "images": [
	I0829 19:01:16.272916   50033 command_runner.go:130] >     {
	I0829 19:01:16.272926   50033 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:01:16.272933   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.272942   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:01:16.272948   50033 command_runner.go:130] >       ],
	I0829 19:01:16.272955   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.272967   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:01:16.272982   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:01:16.272990   50033 command_runner.go:130] >       ],
	I0829 19:01:16.272996   50033 command_runner.go:130] >       "size": "87165492",
	I0829 19:01:16.273002   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273006   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273017   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273021   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273027   50033 command_runner.go:130] >     },
	I0829 19:01:16.273030   50033 command_runner.go:130] >     {
	I0829 19:01:16.273036   50033 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:01:16.273041   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273046   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:01:16.273050   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273053   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273060   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:01:16.273069   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:01:16.273073   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273079   50033 command_runner.go:130] >       "size": "87190579",
	I0829 19:01:16.273083   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273093   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273100   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273109   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273116   50033 command_runner.go:130] >     },
	I0829 19:01:16.273124   50033 command_runner.go:130] >     {
	I0829 19:01:16.273136   50033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:01:16.273145   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273154   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:01:16.273162   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273171   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273193   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:01:16.273207   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:01:16.273215   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273222   50033 command_runner.go:130] >       "size": "1363676",
	I0829 19:01:16.273230   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273239   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273249   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273259   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273266   50033 command_runner.go:130] >     },
	I0829 19:01:16.273275   50033 command_runner.go:130] >     {
	I0829 19:01:16.273287   50033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:01:16.273297   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273308   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:01:16.273324   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273333   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273348   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:01:16.273377   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:01:16.273386   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273393   50033 command_runner.go:130] >       "size": "31470524",
	I0829 19:01:16.273405   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273415   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273423   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273431   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273440   50033 command_runner.go:130] >     },
	I0829 19:01:16.273449   50033 command_runner.go:130] >     {
	I0829 19:01:16.273461   50033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:01:16.273470   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273488   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:01:16.273496   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273505   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273518   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:01:16.273532   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:01:16.273540   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273550   50033 command_runner.go:130] >       "size": "61245718",
	I0829 19:01:16.273558   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273565   50033 command_runner.go:130] >       "username": "nonroot",
	I0829 19:01:16.273667   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273791   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273803   50033 command_runner.go:130] >     },
	I0829 19:01:16.273809   50033 command_runner.go:130] >     {
	I0829 19:01:16.273820   50033 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:01:16.273831   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273845   50033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:01:16.273867   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273874   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273889   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:01:16.273905   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:01:16.273910   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273917   50033 command_runner.go:130] >       "size": "149009664",
	I0829 19:01:16.273923   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.273929   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.273939   50033 command_runner.go:130] >       },
	I0829 19:01:16.273967   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274027   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274034   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274038   50033 command_runner.go:130] >     },
	I0829 19:01:16.274042   50033 command_runner.go:130] >     {
	I0829 19:01:16.274051   50033 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:01:16.274062   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274072   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:01:16.274077   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274084   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274114   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:01:16.274130   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:01:16.274134   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274144   50033 command_runner.go:130] >       "size": "95233506",
	I0829 19:01:16.274148   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274153   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.274156   50033 command_runner.go:130] >       },
	I0829 19:01:16.274159   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274163   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274170   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274175   50033 command_runner.go:130] >     },
	I0829 19:01:16.274180   50033 command_runner.go:130] >     {
	I0829 19:01:16.274191   50033 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:01:16.274198   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274211   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:01:16.274217   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274224   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274249   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:01:16.274264   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:01:16.274270   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274277   50033 command_runner.go:130] >       "size": "89437512",
	I0829 19:01:16.274283   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274296   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.274301   50033 command_runner.go:130] >       },
	I0829 19:01:16.274310   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274316   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274322   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274334   50033 command_runner.go:130] >     },
	I0829 19:01:16.274338   50033 command_runner.go:130] >     {
	I0829 19:01:16.274349   50033 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:01:16.274356   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274364   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:01:16.274370   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274381   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274396   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:01:16.274413   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:01:16.274418   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274423   50033 command_runner.go:130] >       "size": "92728217",
	I0829 19:01:16.274426   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.274431   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274438   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274450   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274456   50033 command_runner.go:130] >     },
	I0829 19:01:16.274461   50033 command_runner.go:130] >     {
	I0829 19:01:16.274472   50033 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:01:16.274478   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274491   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:01:16.274497   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274504   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274511   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:01:16.274526   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:01:16.274532   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274539   50033 command_runner.go:130] >       "size": "68420936",
	I0829 19:01:16.274545   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274557   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.274563   50033 command_runner.go:130] >       },
	I0829 19:01:16.274569   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274575   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274581   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274587   50033 command_runner.go:130] >     },
	I0829 19:01:16.274591   50033 command_runner.go:130] >     {
	I0829 19:01:16.274599   50033 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:01:16.274605   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274618   50033 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:01:16.274625   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274642   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274653   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:01:16.274670   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:01:16.274675   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274680   50033 command_runner.go:130] >       "size": "742080",
	I0829 19:01:16.274685   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274692   50033 command_runner.go:130] >         "value": "65535"
	I0829 19:01:16.274697   50033 command_runner.go:130] >       },
	I0829 19:01:16.274704   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274715   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274721   50033 command_runner.go:130] >       "pinned": true
	I0829 19:01:16.274727   50033 command_runner.go:130] >     }
	I0829 19:01:16.274732   50033 command_runner.go:130] >   ]
	I0829 19:01:16.274736   50033 command_runner.go:130] > }
	I0829 19:01:16.275025   50033 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:01:16.275036   50033 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:01:16.275136   50033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:01:16.307070   50033 command_runner.go:130] > {
	I0829 19:01:16.307095   50033 command_runner.go:130] >   "images": [
	I0829 19:01:16.307103   50033 command_runner.go:130] >     {
	I0829 19:01:16.307113   50033 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:01:16.307120   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307128   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:01:16.307133   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307138   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307152   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:01:16.307169   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:01:16.307177   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307185   50033 command_runner.go:130] >       "size": "87165492",
	I0829 19:01:16.307192   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307202   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307211   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307218   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307224   50033 command_runner.go:130] >     },
	I0829 19:01:16.307232   50033 command_runner.go:130] >     {
	I0829 19:01:16.307242   50033 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:01:16.307249   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307258   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:01:16.307264   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307271   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307283   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:01:16.307295   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:01:16.307302   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307309   50033 command_runner.go:130] >       "size": "87190579",
	I0829 19:01:16.307316   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307326   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307335   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307342   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307348   50033 command_runner.go:130] >     },
	I0829 19:01:16.307365   50033 command_runner.go:130] >     {
	I0829 19:01:16.307376   50033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:01:16.307385   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307395   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:01:16.307402   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307410   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307421   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:01:16.307434   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:01:16.307441   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307451   50033 command_runner.go:130] >       "size": "1363676",
	I0829 19:01:16.307460   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307467   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307484   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307493   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307499   50033 command_runner.go:130] >     },
	I0829 19:01:16.307505   50033 command_runner.go:130] >     {
	I0829 19:01:16.307516   50033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:01:16.307524   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307534   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:01:16.307542   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307550   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307566   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:01:16.307586   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:01:16.307595   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307603   50033 command_runner.go:130] >       "size": "31470524",
	I0829 19:01:16.307611   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307617   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307624   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307633   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307640   50033 command_runner.go:130] >     },
	I0829 19:01:16.307648   50033 command_runner.go:130] >     {
	I0829 19:01:16.307659   50033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:01:16.307668   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307677   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:01:16.307685   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307692   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307707   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:01:16.307722   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:01:16.307731   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307739   50033 command_runner.go:130] >       "size": "61245718",
	I0829 19:01:16.307750   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307760   50033 command_runner.go:130] >       "username": "nonroot",
	I0829 19:01:16.307769   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307779   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307786   50033 command_runner.go:130] >     },
	I0829 19:01:16.307792   50033 command_runner.go:130] >     {
	I0829 19:01:16.307803   50033 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:01:16.307812   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307820   50033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:01:16.307829   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307836   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307850   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:01:16.307864   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:01:16.307873   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307880   50033 command_runner.go:130] >       "size": "149009664",
	I0829 19:01:16.307887   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.307897   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.307909   50033 command_runner.go:130] >       },
	I0829 19:01:16.307918   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307925   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307935   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307943   50033 command_runner.go:130] >     },
	I0829 19:01:16.307950   50033 command_runner.go:130] >     {
	I0829 19:01:16.307960   50033 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:01:16.307969   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307981   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:01:16.307990   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307997   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308013   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:01:16.308028   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:01:16.308037   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308045   50033 command_runner.go:130] >       "size": "95233506",
	I0829 19:01:16.308054   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308064   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.308072   50033 command_runner.go:130] >       },
	I0829 19:01:16.308080   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308090   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308098   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308106   50033 command_runner.go:130] >     },
	I0829 19:01:16.308113   50033 command_runner.go:130] >     {
	I0829 19:01:16.308125   50033 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:01:16.308133   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308144   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:01:16.308152   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308160   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308187   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:01:16.308202   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:01:16.308208   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308216   50033 command_runner.go:130] >       "size": "89437512",
	I0829 19:01:16.308225   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308232   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.308240   50033 command_runner.go:130] >       },
	I0829 19:01:16.308248   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308257   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308265   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308273   50033 command_runner.go:130] >     },
	I0829 19:01:16.308279   50033 command_runner.go:130] >     {
	I0829 19:01:16.308290   50033 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:01:16.308299   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308310   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:01:16.308318   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308324   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308337   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:01:16.308360   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:01:16.308369   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308376   50033 command_runner.go:130] >       "size": "92728217",
	I0829 19:01:16.308384   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.308395   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308404   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308412   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308420   50033 command_runner.go:130] >     },
	I0829 19:01:16.308427   50033 command_runner.go:130] >     {
	I0829 19:01:16.308439   50033 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:01:16.308448   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308458   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:01:16.308465   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308475   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308489   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:01:16.308504   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:01:16.308512   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308519   50033 command_runner.go:130] >       "size": "68420936",
	I0829 19:01:16.308528   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308535   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.308543   50033 command_runner.go:130] >       },
	I0829 19:01:16.308550   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308559   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308567   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308575   50033 command_runner.go:130] >     },
	I0829 19:01:16.308581   50033 command_runner.go:130] >     {
	I0829 19:01:16.308592   50033 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:01:16.308601   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308609   50033 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:01:16.308619   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308626   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308640   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:01:16.308655   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:01:16.308663   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308671   50033 command_runner.go:130] >       "size": "742080",
	I0829 19:01:16.308679   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308687   50033 command_runner.go:130] >         "value": "65535"
	I0829 19:01:16.308695   50033 command_runner.go:130] >       },
	I0829 19:01:16.308703   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308712   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308722   50033 command_runner.go:130] >       "pinned": true
	I0829 19:01:16.308728   50033 command_runner.go:130] >     }
	I0829 19:01:16.308736   50033 command_runner.go:130] >   ]
	I0829 19:01:16.308742   50033 command_runner.go:130] > }
	I0829 19:01:16.308868   50033 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:01:16.308880   50033 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:01:16.308889   50033 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.31.0 crio true true} ...
	I0829 19:01:16.309014   50033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-922931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:01:16.309135   50033 ssh_runner.go:195] Run: crio config
	I0829 19:01:16.347133   50033 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0829 19:01:16.347160   50033 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0829 19:01:16.347170   50033 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0829 19:01:16.347174   50033 command_runner.go:130] > #
	I0829 19:01:16.347184   50033 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0829 19:01:16.347192   50033 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0829 19:01:16.347201   50033 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0829 19:01:16.347211   50033 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0829 19:01:16.347218   50033 command_runner.go:130] > # reload'.
	I0829 19:01:16.347228   50033 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0829 19:01:16.347240   50033 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0829 19:01:16.347249   50033 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0829 19:01:16.347259   50033 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0829 19:01:16.347268   50033 command_runner.go:130] > [crio]
	I0829 19:01:16.347278   50033 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0829 19:01:16.347290   50033 command_runner.go:130] > # containers images, in this directory.
	I0829 19:01:16.347374   50033 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0829 19:01:16.347402   50033 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0829 19:01:16.347415   50033 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0829 19:01:16.347430   50033 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0829 19:01:16.347439   50033 command_runner.go:130] > # imagestore = ""
	I0829 19:01:16.347448   50033 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0829 19:01:16.347460   50033 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0829 19:01:16.347470   50033 command_runner.go:130] > storage_driver = "overlay"
	I0829 19:01:16.347482   50033 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0829 19:01:16.347494   50033 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0829 19:01:16.347504   50033 command_runner.go:130] > storage_option = [
	I0829 19:01:16.347514   50033 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0829 19:01:16.347522   50033 command_runner.go:130] > ]
	I0829 19:01:16.347533   50033 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0829 19:01:16.347557   50033 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0829 19:01:16.347569   50033 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0829 19:01:16.347580   50033 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0829 19:01:16.347593   50033 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0829 19:01:16.347604   50033 command_runner.go:130] > # always happen on a node reboot
	I0829 19:01:16.347615   50033 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0829 19:01:16.347630   50033 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0829 19:01:16.347642   50033 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0829 19:01:16.347655   50033 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0829 19:01:16.347668   50033 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0829 19:01:16.347682   50033 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0829 19:01:16.347699   50033 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0829 19:01:16.347711   50033 command_runner.go:130] > # internal_wipe = true
	I0829 19:01:16.347724   50033 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0829 19:01:16.347737   50033 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0829 19:01:16.347745   50033 command_runner.go:130] > # internal_repair = false
	I0829 19:01:16.347774   50033 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0829 19:01:16.347790   50033 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0829 19:01:16.347803   50033 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0829 19:01:16.347811   50033 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0829 19:01:16.347824   50033 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0829 19:01:16.347830   50033 command_runner.go:130] > [crio.api]
	I0829 19:01:16.347839   50033 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0829 19:01:16.347849   50033 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0829 19:01:16.347861   50033 command_runner.go:130] > # IP address on which the stream server will listen.
	I0829 19:01:16.347871   50033 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0829 19:01:16.347884   50033 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0829 19:01:16.347896   50033 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0829 19:01:16.347904   50033 command_runner.go:130] > # stream_port = "0"
	I0829 19:01:16.347913   50033 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0829 19:01:16.347922   50033 command_runner.go:130] > # stream_enable_tls = false
	I0829 19:01:16.347931   50033 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0829 19:01:16.347942   50033 command_runner.go:130] > # stream_idle_timeout = ""
	I0829 19:01:16.347952   50033 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0829 19:01:16.347963   50033 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0829 19:01:16.347971   50033 command_runner.go:130] > # minutes.
	I0829 19:01:16.347979   50033 command_runner.go:130] > # stream_tls_cert = ""
	I0829 19:01:16.347998   50033 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0829 19:01:16.348010   50033 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0829 19:01:16.348040   50033 command_runner.go:130] > # stream_tls_key = ""
	I0829 19:01:16.348059   50033 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0829 19:01:16.348073   50033 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0829 19:01:16.348103   50033 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0829 19:01:16.348114   50033 command_runner.go:130] > # stream_tls_ca = ""
	I0829 19:01:16.348129   50033 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:01:16.348140   50033 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0829 19:01:16.348151   50033 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:01:16.348165   50033 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0829 19:01:16.348175   50033 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0829 19:01:16.348187   50033 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0829 19:01:16.348195   50033 command_runner.go:130] > [crio.runtime]
	I0829 19:01:16.348204   50033 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0829 19:01:16.348213   50033 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0829 19:01:16.348222   50033 command_runner.go:130] > # "nofile=1024:2048"
	I0829 19:01:16.348232   50033 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0829 19:01:16.348241   50033 command_runner.go:130] > # default_ulimits = [
	I0829 19:01:16.348247   50033 command_runner.go:130] > # ]
	I0829 19:01:16.348256   50033 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0829 19:01:16.348266   50033 command_runner.go:130] > # no_pivot = false
	I0829 19:01:16.348275   50033 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0829 19:01:16.348287   50033 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0829 19:01:16.348298   50033 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0829 19:01:16.348308   50033 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0829 19:01:16.348318   50033 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0829 19:01:16.348328   50033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:01:16.348335   50033 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0829 19:01:16.348344   50033 command_runner.go:130] > # Cgroup setting for conmon
	I0829 19:01:16.348374   50033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0829 19:01:16.348386   50033 command_runner.go:130] > conmon_cgroup = "pod"
	I0829 19:01:16.348399   50033 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0829 19:01:16.348410   50033 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0829 19:01:16.348422   50033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:01:16.348430   50033 command_runner.go:130] > conmon_env = [
	I0829 19:01:16.348435   50033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:01:16.348445   50033 command_runner.go:130] > ]
	I0829 19:01:16.348466   50033 command_runner.go:130] > # Additional environment variables to set for all the
	I0829 19:01:16.348479   50033 command_runner.go:130] > # containers. These are overridden if set in the
	I0829 19:01:16.348490   50033 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0829 19:01:16.348499   50033 command_runner.go:130] > # default_env = [
	I0829 19:01:16.348507   50033 command_runner.go:130] > # ]
	I0829 19:01:16.348516   50033 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0829 19:01:16.348528   50033 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0829 19:01:16.348535   50033 command_runner.go:130] > # selinux = false
	I0829 19:01:16.348544   50033 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0829 19:01:16.348557   50033 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0829 19:01:16.348570   50033 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0829 19:01:16.348578   50033 command_runner.go:130] > # seccomp_profile = ""
	I0829 19:01:16.348590   50033 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0829 19:01:16.348601   50033 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0829 19:01:16.348613   50033 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0829 19:01:16.348619   50033 command_runner.go:130] > # which might increase security.
	I0829 19:01:16.348627   50033 command_runner.go:130] > # This option is currently deprecated,
	I0829 19:01:16.348639   50033 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0829 19:01:16.348650   50033 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0829 19:01:16.348662   50033 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0829 19:01:16.348673   50033 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0829 19:01:16.348685   50033 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0829 19:01:16.348696   50033 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0829 19:01:16.348704   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.348710   50033 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0829 19:01:16.348722   50033 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0829 19:01:16.348732   50033 command_runner.go:130] > # the cgroup blockio controller.
	I0829 19:01:16.348742   50033 command_runner.go:130] > # blockio_config_file = ""
	I0829 19:01:16.348755   50033 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0829 19:01:16.348765   50033 command_runner.go:130] > # blockio parameters.
	I0829 19:01:16.348901   50033 command_runner.go:130] > # blockio_reload = false
	I0829 19:01:16.348918   50033 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0829 19:01:16.348927   50033 command_runner.go:130] > # irqbalance daemon.
	I0829 19:01:16.349177   50033 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0829 19:01:16.349191   50033 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0829 19:01:16.349204   50033 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0829 19:01:16.349224   50033 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0829 19:01:16.349402   50033 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0829 19:01:16.349417   50033 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0829 19:01:16.349425   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.350028   50033 command_runner.go:130] > # rdt_config_file = ""
	I0829 19:01:16.350044   50033 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0829 19:01:16.350049   50033 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0829 19:01:16.350084   50033 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0829 19:01:16.350109   50033 command_runner.go:130] > # separate_pull_cgroup = ""
	I0829 19:01:16.350119   50033 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0829 19:01:16.350132   50033 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0829 19:01:16.350140   50033 command_runner.go:130] > # will be added.
	I0829 19:01:16.350144   50033 command_runner.go:130] > # default_capabilities = [
	I0829 19:01:16.350147   50033 command_runner.go:130] > # 	"CHOWN",
	I0829 19:01:16.350151   50033 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0829 19:01:16.350157   50033 command_runner.go:130] > # 	"FSETID",
	I0829 19:01:16.350161   50033 command_runner.go:130] > # 	"FOWNER",
	I0829 19:01:16.350165   50033 command_runner.go:130] > # 	"SETGID",
	I0829 19:01:16.350171   50033 command_runner.go:130] > # 	"SETUID",
	I0829 19:01:16.350175   50033 command_runner.go:130] > # 	"SETPCAP",
	I0829 19:01:16.350181   50033 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0829 19:01:16.350186   50033 command_runner.go:130] > # 	"KILL",
	I0829 19:01:16.350194   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350205   50033 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0829 19:01:16.350219   50033 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0829 19:01:16.350230   50033 command_runner.go:130] > # add_inheritable_capabilities = false
	I0829 19:01:16.350242   50033 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0829 19:01:16.350255   50033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:01:16.350268   50033 command_runner.go:130] > default_sysctls = [
	I0829 19:01:16.350273   50033 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0829 19:01:16.350278   50033 command_runner.go:130] > ]
	I0829 19:01:16.350282   50033 command_runner.go:130] > # List of devices on the host that a
	I0829 19:01:16.350288   50033 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0829 19:01:16.350294   50033 command_runner.go:130] > # allowed_devices = [
	I0829 19:01:16.350297   50033 command_runner.go:130] > # 	"/dev/fuse",
	I0829 19:01:16.350301   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350316   50033 command_runner.go:130] > # List of additional devices. specified as
	I0829 19:01:16.350330   50033 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0829 19:01:16.350339   50033 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0829 19:01:16.350351   50033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:01:16.350367   50033 command_runner.go:130] > # additional_devices = [
	I0829 19:01:16.350376   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350383   50033 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0829 19:01:16.350395   50033 command_runner.go:130] > # cdi_spec_dirs = [
	I0829 19:01:16.350404   50033 command_runner.go:130] > # 	"/etc/cdi",
	I0829 19:01:16.350410   50033 command_runner.go:130] > # 	"/var/run/cdi",
	I0829 19:01:16.350416   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350427   50033 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0829 19:01:16.350439   50033 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0829 19:01:16.350449   50033 command_runner.go:130] > # Defaults to false.
	I0829 19:01:16.350456   50033 command_runner.go:130] > # device_ownership_from_security_context = false
	I0829 19:01:16.350467   50033 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0829 19:01:16.350475   50033 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0829 19:01:16.350479   50033 command_runner.go:130] > # hooks_dir = [
	I0829 19:01:16.350487   50033 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0829 19:01:16.350492   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350505   50033 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0829 19:01:16.350518   50033 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0829 19:01:16.350529   50033 command_runner.go:130] > # its default mounts from the following two files:
	I0829 19:01:16.350536   50033 command_runner.go:130] > #
	I0829 19:01:16.350545   50033 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0829 19:01:16.350558   50033 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0829 19:01:16.350567   50033 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0829 19:01:16.350571   50033 command_runner.go:130] > #
	I0829 19:01:16.350583   50033 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0829 19:01:16.350596   50033 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0829 19:01:16.350609   50033 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0829 19:01:16.350620   50033 command_runner.go:130] > #      only add mounts it finds in this file.
	I0829 19:01:16.350625   50033 command_runner.go:130] > #
	I0829 19:01:16.350634   50033 command_runner.go:130] > # default_mounts_file = ""
	I0829 19:01:16.350644   50033 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0829 19:01:16.350656   50033 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0829 19:01:16.350670   50033 command_runner.go:130] > pids_limit = 1024
	I0829 19:01:16.350682   50033 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0829 19:01:16.350695   50033 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0829 19:01:16.350705   50033 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0829 19:01:16.350721   50033 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0829 19:01:16.350730   50033 command_runner.go:130] > # log_size_max = -1
	I0829 19:01:16.350741   50033 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0829 19:01:16.350752   50033 command_runner.go:130] > # log_to_journald = false
	I0829 19:01:16.350762   50033 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0829 19:01:16.350770   50033 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0829 19:01:16.350780   50033 command_runner.go:130] > # Path to directory for container attach sockets.
	I0829 19:01:16.350791   50033 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0829 19:01:16.350800   50033 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0829 19:01:16.350810   50033 command_runner.go:130] > # bind_mount_prefix = ""
	I0829 19:01:16.350818   50033 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0829 19:01:16.350827   50033 command_runner.go:130] > # read_only = false
	I0829 19:01:16.350836   50033 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0829 19:01:16.350848   50033 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0829 19:01:16.350857   50033 command_runner.go:130] > # live configuration reload.
	I0829 19:01:16.350864   50033 command_runner.go:130] > # log_level = "info"
	I0829 19:01:16.350871   50033 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0829 19:01:16.350881   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.350888   50033 command_runner.go:130] > # log_filter = ""
	I0829 19:01:16.350900   50033 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0829 19:01:16.350913   50033 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0829 19:01:16.350922   50033 command_runner.go:130] > # separated by comma.
	I0829 19:01:16.350934   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.350943   50033 command_runner.go:130] > # uid_mappings = ""
	I0829 19:01:16.350949   50033 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0829 19:01:16.350959   50033 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0829 19:01:16.350969   50033 command_runner.go:130] > # separated by comma.
	I0829 19:01:16.350981   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.350990   50033 command_runner.go:130] > # gid_mappings = ""
	I0829 19:01:16.351004   50033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0829 19:01:16.351015   50033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:01:16.351027   50033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:01:16.351043   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.351052   50033 command_runner.go:130] > # minimum_mappable_uid = -1
	I0829 19:01:16.351061   50033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0829 19:01:16.351074   50033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:01:16.351085   50033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:01:16.351100   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.351109   50033 command_runner.go:130] > # minimum_mappable_gid = -1
	I0829 19:01:16.351118   50033 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0829 19:01:16.351129   50033 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0829 19:01:16.351140   50033 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0829 19:01:16.351147   50033 command_runner.go:130] > # ctr_stop_timeout = 30
	I0829 19:01:16.351158   50033 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0829 19:01:16.351168   50033 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0829 19:01:16.351178   50033 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0829 19:01:16.351186   50033 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0829 19:01:16.351197   50033 command_runner.go:130] > drop_infra_ctr = false
	I0829 19:01:16.351206   50033 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0829 19:01:16.351218   50033 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0829 19:01:16.351231   50033 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0829 19:01:16.351240   50033 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0829 19:01:16.351248   50033 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0829 19:01:16.351257   50033 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0829 19:01:16.351262   50033 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0829 19:01:16.351268   50033 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0829 19:01:16.351271   50033 command_runner.go:130] > # shared_cpuset = ""
	I0829 19:01:16.351277   50033 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0829 19:01:16.351286   50033 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0829 19:01:16.351293   50033 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0829 19:01:16.351305   50033 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0829 19:01:16.351312   50033 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0829 19:01:16.351324   50033 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0829 19:01:16.351334   50033 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0829 19:01:16.351344   50033 command_runner.go:130] > # enable_criu_support = false
	I0829 19:01:16.351352   50033 command_runner.go:130] > # Enable/disable the generation of the container,
	I0829 19:01:16.351368   50033 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0829 19:01:16.351378   50033 command_runner.go:130] > # enable_pod_events = false
	I0829 19:01:16.351396   50033 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0829 19:01:16.351409   50033 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0829 19:01:16.351420   50033 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0829 19:01:16.351429   50033 command_runner.go:130] > # default_runtime = "runc"
	I0829 19:01:16.351437   50033 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0829 19:01:16.351447   50033 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0829 19:01:16.351460   50033 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0829 19:01:16.351466   50033 command_runner.go:130] > # creation as a file is not desired either.
	I0829 19:01:16.351475   50033 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0829 19:01:16.351489   50033 command_runner.go:130] > # the hostname is being managed dynamically.
	I0829 19:01:16.351499   50033 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0829 19:01:16.351505   50033 command_runner.go:130] > # ]
	I0829 19:01:16.351517   50033 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0829 19:01:16.351530   50033 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0829 19:01:16.351541   50033 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0829 19:01:16.351550   50033 command_runner.go:130] > # Each entry in the table should follow the format:
	I0829 19:01:16.351555   50033 command_runner.go:130] > #
	I0829 19:01:16.351559   50033 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0829 19:01:16.351566   50033 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0829 19:01:16.351609   50033 command_runner.go:130] > # runtime_type = "oci"
	I0829 19:01:16.351617   50033 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0829 19:01:16.351622   50033 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0829 19:01:16.351626   50033 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0829 19:01:16.351631   50033 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0829 19:01:16.351636   50033 command_runner.go:130] > # monitor_env = []
	I0829 19:01:16.351641   50033 command_runner.go:130] > # privileged_without_host_devices = false
	I0829 19:01:16.351647   50033 command_runner.go:130] > # allowed_annotations = []
	I0829 19:01:16.351652   50033 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0829 19:01:16.351657   50033 command_runner.go:130] > # Where:
	I0829 19:01:16.351663   50033 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0829 19:01:16.351670   50033 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0829 19:01:16.351679   50033 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0829 19:01:16.351687   50033 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0829 19:01:16.351694   50033 command_runner.go:130] > #   in $PATH.
	I0829 19:01:16.351700   50033 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0829 19:01:16.351707   50033 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0829 19:01:16.351716   50033 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0829 19:01:16.351722   50033 command_runner.go:130] > #   state.
	I0829 19:01:16.351728   50033 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0829 19:01:16.351735   50033 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0829 19:01:16.351741   50033 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0829 19:01:16.351749   50033 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0829 19:01:16.351758   50033 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0829 19:01:16.351766   50033 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0829 19:01:16.351772   50033 command_runner.go:130] > #   The currently recognized values are:
	I0829 19:01:16.351778   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0829 19:01:16.351787   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0829 19:01:16.351797   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0829 19:01:16.351805   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0829 19:01:16.351814   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0829 19:01:16.351822   50033 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0829 19:01:16.351828   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0829 19:01:16.351836   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0829 19:01:16.351842   50033 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0829 19:01:16.351849   50033 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0829 19:01:16.351854   50033 command_runner.go:130] > #   deprecated option "conmon".
	I0829 19:01:16.351861   50033 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0829 19:01:16.351868   50033 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0829 19:01:16.351874   50033 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0829 19:01:16.351881   50033 command_runner.go:130] > #   should be moved to the container's cgroup
	I0829 19:01:16.351887   50033 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0829 19:01:16.351894   50033 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0829 19:01:16.351900   50033 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0829 19:01:16.351907   50033 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0829 19:01:16.351910   50033 command_runner.go:130] > #
	I0829 19:01:16.351914   50033 command_runner.go:130] > # Using the seccomp notifier feature:
	I0829 19:01:16.351917   50033 command_runner.go:130] > #
	I0829 19:01:16.351923   50033 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0829 19:01:16.351931   50033 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0829 19:01:16.351936   50033 command_runner.go:130] > #
	I0829 19:01:16.351942   50033 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0829 19:01:16.351950   50033 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0829 19:01:16.351958   50033 command_runner.go:130] > #
	I0829 19:01:16.351966   50033 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0829 19:01:16.351975   50033 command_runner.go:130] > # feature.
	I0829 19:01:16.351978   50033 command_runner.go:130] > #
	I0829 19:01:16.351988   50033 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0829 19:01:16.351995   50033 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0829 19:01:16.352003   50033 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0829 19:01:16.352011   50033 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0829 19:01:16.352017   50033 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0829 19:01:16.352028   50033 command_runner.go:130] > #
	I0829 19:01:16.352033   50033 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0829 19:01:16.352042   50033 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0829 19:01:16.352052   50033 command_runner.go:130] > #
	I0829 19:01:16.352058   50033 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0829 19:01:16.352065   50033 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0829 19:01:16.352069   50033 command_runner.go:130] > #
	I0829 19:01:16.352077   50033 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0829 19:01:16.352082   50033 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0829 19:01:16.352088   50033 command_runner.go:130] > # limitation.
	I0829 19:01:16.352092   50033 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0829 19:01:16.352098   50033 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0829 19:01:16.352102   50033 command_runner.go:130] > runtime_type = "oci"
	I0829 19:01:16.352108   50033 command_runner.go:130] > runtime_root = "/run/runc"
	I0829 19:01:16.352112   50033 command_runner.go:130] > runtime_config_path = ""
	I0829 19:01:16.352119   50033 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0829 19:01:16.352123   50033 command_runner.go:130] > monitor_cgroup = "pod"
	I0829 19:01:16.352129   50033 command_runner.go:130] > monitor_exec_cgroup = ""
	I0829 19:01:16.352133   50033 command_runner.go:130] > monitor_env = [
	I0829 19:01:16.352140   50033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:01:16.352143   50033 command_runner.go:130] > ]
	I0829 19:01:16.352147   50033 command_runner.go:130] > privileged_without_host_devices = false
	I0829 19:01:16.352156   50033 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0829 19:01:16.352161   50033 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0829 19:01:16.352167   50033 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0829 19:01:16.352176   50033 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0829 19:01:16.352185   50033 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0829 19:01:16.352198   50033 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0829 19:01:16.352209   50033 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0829 19:01:16.352218   50033 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0829 19:01:16.352226   50033 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0829 19:01:16.352232   50033 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0829 19:01:16.352236   50033 command_runner.go:130] > # Example:
	I0829 19:01:16.352240   50033 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0829 19:01:16.352244   50033 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0829 19:01:16.352248   50033 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0829 19:01:16.352253   50033 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0829 19:01:16.352256   50033 command_runner.go:130] > # cpuset = 0
	I0829 19:01:16.352260   50033 command_runner.go:130] > # cpushares = "0-1"
	I0829 19:01:16.352263   50033 command_runner.go:130] > # Where:
	I0829 19:01:16.352270   50033 command_runner.go:130] > # The workload name is workload-type.
	I0829 19:01:16.352276   50033 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0829 19:01:16.352281   50033 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0829 19:01:16.352286   50033 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0829 19:01:16.352293   50033 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0829 19:01:16.352299   50033 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0829 19:01:16.352303   50033 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0829 19:01:16.352309   50033 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0829 19:01:16.352313   50033 command_runner.go:130] > # Default value is set to true
	I0829 19:01:16.352317   50033 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0829 19:01:16.352322   50033 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0829 19:01:16.352326   50033 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0829 19:01:16.352330   50033 command_runner.go:130] > # Default value is set to 'false'
	I0829 19:01:16.352334   50033 command_runner.go:130] > # disable_hostport_mapping = false
	I0829 19:01:16.352340   50033 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0829 19:01:16.352342   50033 command_runner.go:130] > #
	I0829 19:01:16.352348   50033 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0829 19:01:16.352354   50033 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0829 19:01:16.352362   50033 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0829 19:01:16.352368   50033 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0829 19:01:16.352373   50033 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0829 19:01:16.352376   50033 command_runner.go:130] > [crio.image]
	I0829 19:01:16.352382   50033 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0829 19:01:16.352390   50033 command_runner.go:130] > # default_transport = "docker://"
	I0829 19:01:16.352396   50033 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0829 19:01:16.352402   50033 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:01:16.352405   50033 command_runner.go:130] > # global_auth_file = ""
	I0829 19:01:16.352410   50033 command_runner.go:130] > # The image used to instantiate infra containers.
	I0829 19:01:16.352414   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.352420   50033 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0829 19:01:16.352428   50033 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0829 19:01:16.352434   50033 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:01:16.352442   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.352446   50033 command_runner.go:130] > # pause_image_auth_file = ""
	I0829 19:01:16.352454   50033 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0829 19:01:16.352459   50033 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0829 19:01:16.352469   50033 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0829 19:01:16.352476   50033 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0829 19:01:16.352483   50033 command_runner.go:130] > # pause_command = "/pause"
	I0829 19:01:16.352489   50033 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0829 19:01:16.352496   50033 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0829 19:01:16.352503   50033 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0829 19:01:16.352510   50033 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0829 19:01:16.352516   50033 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0829 19:01:16.352523   50033 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0829 19:01:16.352530   50033 command_runner.go:130] > # pinned_images = [
	I0829 19:01:16.352533   50033 command_runner.go:130] > # ]
	I0829 19:01:16.352540   50033 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0829 19:01:16.352546   50033 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0829 19:01:16.352555   50033 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0829 19:01:16.352561   50033 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0829 19:01:16.352568   50033 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0829 19:01:16.352572   50033 command_runner.go:130] > # signature_policy = ""
	I0829 19:01:16.352579   50033 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0829 19:01:16.352585   50033 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0829 19:01:16.352593   50033 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0829 19:01:16.352600   50033 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0829 19:01:16.352609   50033 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0829 19:01:16.352616   50033 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0829 19:01:16.352626   50033 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0829 19:01:16.352634   50033 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0829 19:01:16.352639   50033 command_runner.go:130] > # changing them here.
	I0829 19:01:16.352645   50033 command_runner.go:130] > # insecure_registries = [
	I0829 19:01:16.352648   50033 command_runner.go:130] > # ]
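	Per the comments above, registry configuration normally belongs in the system-wide containers-registries.conf(5) rather than in crio.conf. A hedged sketch, with a placeholder registry host, of marking a single registry as insecure there:

	sudo tee -a /etc/containers/registries.conf <<'EOF'
	[[registry]]
	location = "registry.example.internal:5000"   # placeholder registry, not from this run
	insecure = true                               # skip TLS verification for this registry only
	EOF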
	I0829 19:01:16.352654   50033 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0829 19:01:16.352661   50033 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0829 19:01:16.352665   50033 command_runner.go:130] > # image_volumes = "mkdir"
	I0829 19:01:16.352672   50033 command_runner.go:130] > # Temporary directory to use for storing big files
	I0829 19:01:16.352676   50033 command_runner.go:130] > # big_files_temporary_dir = ""
	I0829 19:01:16.352684   50033 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0829 19:01:16.352689   50033 command_runner.go:130] > # CNI plugins.
	I0829 19:01:16.352693   50033 command_runner.go:130] > [crio.network]
	I0829 19:01:16.352700   50033 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0829 19:01:16.352708   50033 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0829 19:01:16.352714   50033 command_runner.go:130] > # cni_default_network = ""
	I0829 19:01:16.352720   50033 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0829 19:01:16.352726   50033 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0829 19:01:16.352732   50033 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0829 19:01:16.352737   50033 command_runner.go:130] > # plugin_dirs = [
	I0829 19:01:16.352741   50033 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0829 19:01:16.352746   50033 command_runner.go:130] > # ]
	I0829 19:01:16.352751   50033 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0829 19:01:16.352757   50033 command_runner.go:130] > [crio.metrics]
	I0829 19:01:16.352762   50033 command_runner.go:130] > # Globally enable or disable metrics support.
	I0829 19:01:16.352768   50033 command_runner.go:130] > enable_metrics = true
	I0829 19:01:16.352772   50033 command_runner.go:130] > # Specify enabled metrics collectors.
	I0829 19:01:16.352778   50033 command_runner.go:130] > # Per default all metrics are enabled.
	I0829 19:01:16.352784   50033 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0829 19:01:16.352792   50033 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0829 19:01:16.352800   50033 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0829 19:01:16.352803   50033 command_runner.go:130] > # metrics_collectors = [
	I0829 19:01:16.352809   50033 command_runner.go:130] > # 	"operations",
	I0829 19:01:16.352814   50033 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0829 19:01:16.352821   50033 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0829 19:01:16.352824   50033 command_runner.go:130] > # 	"operations_errors",
	I0829 19:01:16.352833   50033 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0829 19:01:16.352840   50033 command_runner.go:130] > # 	"image_pulls_by_name",
	I0829 19:01:16.352844   50033 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0829 19:01:16.352850   50033 command_runner.go:130] > # 	"image_pulls_failures",
	I0829 19:01:16.352855   50033 command_runner.go:130] > # 	"image_pulls_successes",
	I0829 19:01:16.352861   50033 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0829 19:01:16.352865   50033 command_runner.go:130] > # 	"image_layer_reuse",
	I0829 19:01:16.352871   50033 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0829 19:01:16.352876   50033 command_runner.go:130] > # 	"containers_oom_total",
	I0829 19:01:16.352882   50033 command_runner.go:130] > # 	"containers_oom",
	I0829 19:01:16.352886   50033 command_runner.go:130] > # 	"processes_defunct",
	I0829 19:01:16.352892   50033 command_runner.go:130] > # 	"operations_total",
	I0829 19:01:16.352896   50033 command_runner.go:130] > # 	"operations_latency_seconds",
	I0829 19:01:16.352902   50033 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0829 19:01:16.352906   50033 command_runner.go:130] > # 	"operations_errors_total",
	I0829 19:01:16.352912   50033 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0829 19:01:16.352916   50033 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0829 19:01:16.352922   50033 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0829 19:01:16.352926   50033 command_runner.go:130] > # 	"image_pulls_success_total",
	I0829 19:01:16.352932   50033 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0829 19:01:16.352936   50033 command_runner.go:130] > # 	"containers_oom_count_total",
	I0829 19:01:16.352943   50033 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0829 19:01:16.352947   50033 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0829 19:01:16.352954   50033 command_runner.go:130] > # ]
	I0829 19:01:16.352960   50033 command_runner.go:130] > # The port on which the metrics server will listen.
	I0829 19:01:16.352966   50033 command_runner.go:130] > # metrics_port = 9090
	I0829 19:01:16.352972   50033 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0829 19:01:16.352977   50033 command_runner.go:130] > # metrics_socket = ""
	I0829 19:01:16.352982   50033 command_runner.go:130] > # The certificate for the secure metrics server.
	I0829 19:01:16.352990   50033 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0829 19:01:16.352996   50033 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0829 19:01:16.353003   50033 command_runner.go:130] > # certificate on any modification event.
	I0829 19:01:16.353006   50033 command_runner.go:130] > # metrics_cert = ""
	I0829 19:01:16.353011   50033 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0829 19:01:16.353018   50033 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0829 19:01:16.353022   50033 command_runner.go:130] > # metrics_key = ""
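	This profile enables metrics (enable_metrics = true above) and leaves the port at its commented default of 9090. A hedged sketch of scraping the endpoint directly on the node, assuming it is reachable on localhost; the grep pattern is only illustrative:

	curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head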
	I0829 19:01:16.353033   50033 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0829 19:01:16.353039   50033 command_runner.go:130] > [crio.tracing]
	I0829 19:01:16.353044   50033 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0829 19:01:16.353050   50033 command_runner.go:130] > # enable_tracing = false
	I0829 19:01:16.353058   50033 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0829 19:01:16.353064   50033 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0829 19:01:16.353070   50033 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0829 19:01:16.353077   50033 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0829 19:01:16.353081   50033 command_runner.go:130] > # CRI-O NRI configuration.
	I0829 19:01:16.353087   50033 command_runner.go:130] > [crio.nri]
	I0829 19:01:16.353091   50033 command_runner.go:130] > # Globally enable or disable NRI.
	I0829 19:01:16.353095   50033 command_runner.go:130] > # enable_nri = false
	I0829 19:01:16.353099   50033 command_runner.go:130] > # NRI socket to listen on.
	I0829 19:01:16.353107   50033 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0829 19:01:16.353113   50033 command_runner.go:130] > # NRI plugin directory to use.
	I0829 19:01:16.353124   50033 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0829 19:01:16.353133   50033 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0829 19:01:16.353140   50033 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0829 19:01:16.353145   50033 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0829 19:01:16.353152   50033 command_runner.go:130] > # nri_disable_connections = false
	I0829 19:01:16.353157   50033 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0829 19:01:16.353164   50033 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0829 19:01:16.353169   50033 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0829 19:01:16.353175   50033 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0829 19:01:16.353181   50033 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0829 19:01:16.353186   50033 command_runner.go:130] > [crio.stats]
	I0829 19:01:16.353192   50033 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0829 19:01:16.353199   50033 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0829 19:01:16.353205   50033 command_runner.go:130] > # stats_collection_period = 0
	I0829 19:01:16.353228   50033 command_runner.go:130] ! time="2024-08-29 19:01:16.312144775Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0829 19:01:16.353250   50033 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0829 19:01:16.353395   50033 cni.go:84] Creating CNI manager for ""
	I0829 19:01:16.353414   50033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 19:01:16.353440   50033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:01:16.353473   50033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-922931 NodeName:multinode-922931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:01:16.353667   50033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-922931"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
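	The kubeadm config above is written to the node as /var/tmp/minikube/kubeadm.yaml.new a few steps below. A hedged sketch of inspecting it and comparing it against kubeadm's own defaults for this Kubernetes version, using the binaries directory listed below:

	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config print init-defaults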
	I0829 19:01:16.353739   50033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:01:16.363176   50033 command_runner.go:130] > kubeadm
	I0829 19:01:16.363190   50033 command_runner.go:130] > kubectl
	I0829 19:01:16.363195   50033 command_runner.go:130] > kubelet
	I0829 19:01:16.363215   50033 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:01:16.363269   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:01:16.372175   50033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:01:16.387722   50033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:01:16.402868   50033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0829 19:01:16.417844   50033 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0829 19:01:16.421328   50033 command_runner.go:130] > 192.168.39.171	control-plane.minikube.internal
	I0829 19:01:16.421458   50033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:01:16.562663   50033 ssh_runner.go:195] Run: sudo systemctl start kubelet
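	With the unit files in place and kubelet restarted through systemd above, a hedged sketch of confirming the service is healthy on the node (standard systemctl/journalctl invocations, not taken from this run):

	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager -n 20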
	I0829 19:01:16.576752   50033 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931 for IP: 192.168.39.171
	I0829 19:01:16.576774   50033 certs.go:194] generating shared ca certs ...
	I0829 19:01:16.576800   50033 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:01:16.576968   50033 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:01:16.577021   50033 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:01:16.577035   50033 certs.go:256] generating profile certs ...
	I0829 19:01:16.577187   50033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/client.key
	I0829 19:01:16.577244   50033 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.key.a63428f4
	I0829 19:01:16.577274   50033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.key
	I0829 19:01:16.577282   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:01:16.577293   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:01:16.577310   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:01:16.577322   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:01:16.577340   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:01:16.577355   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:01:16.577369   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:01:16.577378   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:01:16.577441   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:01:16.577482   50033 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:01:16.577497   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:01:16.577524   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:01:16.577550   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:01:16.577583   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:01:16.577643   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:01:16.577675   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.577696   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.577715   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.578382   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:01:16.601162   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:01:16.624283   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:01:16.646958   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:01:16.668846   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:01:16.691137   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:01:16.712989   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:01:16.736142   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:01:16.758429   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:01:16.780429   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:01:16.802946   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:01:16.824359   50033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:01:16.839667   50033 ssh_runner.go:195] Run: openssl version
	I0829 19:01:16.845224   50033 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0829 19:01:16.845284   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:01:16.855340   50033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.859379   50033 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.859471   50033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.859534   50033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.864634   50033 command_runner.go:130] > b5213941
	I0829 19:01:16.864811   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:01:16.875614   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:01:16.886281   50033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.890536   50033 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.890568   50033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.890633   50033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.896398   50033 command_runner.go:130] > 51391683
	I0829 19:01:16.896565   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:01:16.906370   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:01:16.918304   50033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.923130   50033 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.923157   50033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.923214   50033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.929297   50033 command_runner.go:130] > 3ec20f2e
	I0829 19:01:16.929387   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
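	The hash-and-symlink steps above follow OpenSSL's CA lookup convention: each trusted certificate in /etc/ssl/certs is found through a <subject-hash>.0 symlink. A hedged sketch of the same pattern for the minikubeCA certificate handled above:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"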
	I0829 19:01:16.940746   50033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:01:16.945381   50033 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:01:16.945411   50033 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0829 19:01:16.945421   50033 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0829 19:01:16.945430   50033 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:01:16.945438   50033 command_runner.go:130] > Access: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945447   50033 command_runner.go:130] > Modify: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945455   50033 command_runner.go:130] > Change: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945462   50033 command_runner.go:130] >  Birth: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945549   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:01:16.951058   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.951229   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:01:16.956737   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.956847   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:01:16.962074   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.962167   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:01:16.967444   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.967502   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:01:16.972638   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.972697   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:01:16.977738   50033 command_runner.go:130] > Certificate will not expire
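	The "-checkend 86400" calls above ask OpenSSL whether each certificate expires within the next 86400 seconds (24 hours); exit status 0 produces the "Certificate will not expire" lines seen here. A hedged one-liner of the same check for a single certificate:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "Certificate will not expire"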
	I0829 19:01:16.977912   50033 kubeadm.go:392] StartCluster: {Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:01:16.978012   50033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:01:16.978069   50033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:01:17.014228   50033 command_runner.go:130] > e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0
	I0829 19:01:17.014253   50033 command_runner.go:130] > 621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655
	I0829 19:01:17.014259   50033 command_runner.go:130] > 04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba
	I0829 19:01:17.014266   50033 command_runner.go:130] > f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626
	I0829 19:01:17.014271   50033 command_runner.go:130] > 629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632
	I0829 19:01:17.014277   50033 command_runner.go:130] > 4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2
	I0829 19:01:17.014282   50033 command_runner.go:130] > 7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547
	I0829 19:01:17.014291   50033 command_runner.go:130] > 03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e
	I0829 19:01:17.014308   50033 cri.go:89] found id: "e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0"
	I0829 19:01:17.014315   50033 cri.go:89] found id: "621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655"
	I0829 19:01:17.014318   50033 cri.go:89] found id: "04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba"
	I0829 19:01:17.014321   50033 cri.go:89] found id: "f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626"
	I0829 19:01:17.014324   50033 cri.go:89] found id: "629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632"
	I0829 19:01:17.014328   50033 cri.go:89] found id: "4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2"
	I0829 19:01:17.014334   50033 cri.go:89] found id: "7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547"
	I0829 19:01:17.014337   50033 cri.go:89] found id: "03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e"
	I0829 19:01:17.014340   50033 cri.go:89] found id: ""
	I0829 19:01:17.014377   50033 ssh_runner.go:195] Run: sudo runc list -f json
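	The container IDs above come from filtering CRI containers by the kube-system namespace label; the same listing can be reproduced on the node with the crictl invocation minikube runs here:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system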
	
	
	==> CRI-O <==
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.881677911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958183881653613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b37ce7d-68b9-4f3b-9739-e9e83f0d73ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.882500735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87989d75-ace1-4049-a480-a86a4f33f8a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.882555279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87989d75-ace1-4049-a480-a86a4f33f8a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.882894628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87989d75-ace1-4049-a480-a86a4f33f8a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.898259965Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=25ad33b8-5202-4811-ac09-6a5376c1d4b9 name=/runtime.v1.RuntimeService/Status
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.898341792Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=25ad33b8-5202-4811-ac09-6a5376c1d4b9 name=/runtime.v1.RuntimeService/Status
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.910940818Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a5f3a655-68f6-40dd-b7ef-33343b5e15c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.911380452Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-9dk5v,Uid:c948c1ad-9ddf-4518-82e8-2bddad735667,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958116930572513,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:01:22.786899044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-m5hh2,Uid:2d37f71c-00b6-4725-8b5e-8014993dd057,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1724958083153043498,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:01:22.786891786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&PodSandboxMetadata{Name:kindnet-xt8rz,Uid:39ad8429-f82d-40b2-9d5a-f9fd4f36f525,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958083120666294,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-08-29T19:01:22.786901376Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7f61f623-598d-49a4-96f3-e8458a94432d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958083119912844,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T19:01:22.786886066Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&PodSandboxMetadata{Name:kube-proxy-flq24,Uid:62880a62-5e17-4fe0-973c-26fc94f0fea2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958083115993029,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:01:22.786895059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&PodSandboxMetadata{Name:etcd-multinode-922931,Uid:a8389bb6da7de24f38ae42727e6c12a6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958079322379669,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.171:2379,kubernetes.io/config.hash: a8389bb6da7de24f38ae42727e6c12a6,kubernetes.io/config.seen: 2024-08-29T19:01:18.812607541Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metada
ta:&PodSandboxMetadata{Name:kube-apiserver-multinode-922931,Uid:cc99793b364391e874a58acd0561e338,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958079317861126,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.171:8443,kubernetes.io/config.hash: cc99793b364391e874a58acd0561e338,kubernetes.io/config.seen: 2024-08-29T19:01:18.812612075Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-922931,Uid:b860fc526ebadf25d5ed9ab3a571a081,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958079312468421,Labels:map[string]string{comp
onent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b860fc526ebadf25d5ed9ab3a571a081,kubernetes.io/config.seen: 2024-08-29T19:01:18.812614563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-922931,Uid:a0e67e8e32ee1e5831bbef69ea38a32d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724958079311731119,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: a0e67e8e32ee1e5831bbef69ea38a32d,kubernetes.io/config.seen: 2024-08-29T19:01:18.812613418Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-9dk5v,Uid:c948c1ad-9ddf-4518-82e8-2bddad735667,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957752371816835,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:55:52.061203945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7f61f623-598d-49a4-96f3-e8458a94432d,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1724957696873382682,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T18:54:56.565230695Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-m5hh2,Uid:2d37f71c-00b6-4725-8b5e-8014993dd057,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957696868003926,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:54:56.562430942Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&PodSandboxMetadata{Name:kube-proxy-flq24,Uid:62880a62-5e17-4fe0-973c-26fc94f0fea2,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957682362809911,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:54:40.557339044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&PodSandboxMetadata{Name:kindnet-xt8rz,Uid:39ad8429-f82d-40b2-9d5a-f9fd4f36f525,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957681759140808,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T18:54:40.552371460Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-922931,Uid:a0e67e8e32ee1e5831bbef69ea38a32d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957669749487697,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a0e67e8e32ee1e5831bbef69ea38a32d,kubernetes.io/config.seen: 2024-08-29T18:54:29.286143854Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-922931,Uid:b860fc526ebadf25d5ed9ab3a571a081,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957669746779309,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b860fc526ebadf25d5ed9ab3a571a081,kubernetes.io/config.seen: 2024-08-29T18:54:29.286144873Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-922931,Uid:cc99793b364391e874a58acd0561e338,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957669744369913,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.171:8443,kubernetes.io/config.hash: cc99793b364391e874a58acd0561e338,kubernetes.io/config.seen: 2024-08-29T18:54:29.286142702Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&PodSandboxMetadata{Name:etcd-multinode-922931,Uid:a8389bb6da7de24f38ae42727e6c12a6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724957669738010242,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.171:2379,kubernetes.io/config.hash: a8389bb6da7de24f38ae42727e6c12a6,kubernetes.io/config.seen: 2024-08-29T18:54:29.286138558Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a5f3a655-68f6-40dd-b7ef-33343b5e15c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.912271569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e1efb72-4852-4331-9eb4-1b38f6d1b296 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.912329311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e1efb72-4852-4331-9eb4-1b38f6d1b296 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.912688364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e1efb72-4852-4331-9eb4-1b38f6d1b296 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.929476423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=050a3318-b4c5-41da-914d-02af4e2c728f name=/runtime.v1.RuntimeService/Version
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.929546235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=050a3318-b4c5-41da-914d-02af4e2c728f name=/runtime.v1.RuntimeService/Version
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.930460252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38f60300-f036-4c05-ad21-e8e3604f1d74 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.930863728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958183930840821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38f60300-f036-4c05-ad21-e8e3604f1d74 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.931338332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44f70498-f214-443d-8b43-4461099e02f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.931389283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44f70498-f214-443d-8b43-4461099e02f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.931702001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44f70498-f214-443d-8b43-4461099e02f1 name=/runtime.v1.RuntimeService/ListContainers
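	The repeated ListPodSandbox / ListContainers / Version / ImageFsInfo requests above are periodic CRI polling against CRI-O (most likely the kubelet and minikube's crictl-based log collection). As a minimal sketch only, assuming the CRI-O socket lives at /var/run/crio/crio.sock inside the VM and using the k8s.io/cri-api v1 client (this is not part of the test harness), the same four calls can be reproduced with a small Go program:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumed CRI-O socket path inside the minikube VM.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// Same request shapes as in the debug log: empty filters return everything.
		// Errors are ignored for brevity; protobuf getters are nil-safe.
		pods, _ := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
		ctrs, _ := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		ver, _ := rt.Version(ctx, &runtimeapi.VersionRequest{})
		fs, _ := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})

		fmt.Printf("runtime %s %s: %d sandboxes, %d containers, image fs: %v\n",
			ver.GetRuntimeName(), ver.GetRuntimeVersion(),
			len(pods.GetItems()), len(ctrs.GetContainers()), fs.GetImageFilesystems())
	}

	Equivalent one-off queries can be made on the node with crictl (crictl pods, crictl ps -a, crictl version, crictl imagefsinfo), which issue the same RPCs shown in these log lines.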
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.970454919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=045c3786-3a82-4d4e-b92e-f86a3c713eec name=/runtime.v1.RuntimeService/Version
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.970530833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=045c3786-3a82-4d4e-b92e-f86a3c713eec name=/runtime.v1.RuntimeService/Version
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.971622968Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b04b2c5-76e0-4a79-82ff-ddd9fede6a07 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.972288407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958183972051168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b04b2c5-76e0-4a79-82ff-ddd9fede6a07 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.972764956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36c0d3a9-eb5e-44c6-9ece-348fdbb0e07b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.972823661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36c0d3a9-eb5e-44c6-9ece-348fdbb0e07b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:03:03 multinode-922931 crio[2716]: time="2024-08-29 19:03:03.973201351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36c0d3a9-eb5e-44c6-9ece-348fdbb0e07b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4c6c25057b1e2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   1bd222c5d9be5       busybox-7dff88458-9dk5v
	51cf7b4a2b427       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   5a86fb7a38c78       kindnet-xt8rz
	2e5f3cfcde3f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   695f41f3f98e0       coredns-6f6b679f8f-m5hh2
	b8aee643ba501       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   63f0b32e2107d       kube-proxy-flq24
	9cdf127e95844       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   414052dff31ad       storage-provisioner
	99afe537efacf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   ce2b22b804b80       etcd-multinode-922931
	bf6a443417161       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   3bcc04bcf1ef3       kube-apiserver-multinode-922931
	08466cf1de50c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   e781e93e250a0       kube-scheduler-multinode-922931
	cd16207ad3b78       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   eed4a8d43471a       kube-controller-manager-multinode-922931
	d6f12adc01e89       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   64b9f8a3ebc85       busybox-7dff88458-9dk5v
	e9e301ed91cbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   942a26718b07f       storage-provisioner
	621daeb85eedc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   c99b505d069c4       coredns-6f6b679f8f-m5hh2
	04ed982a9d246       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   1253fca3a9769       kindnet-xt8rz
	f0c82b2494ec0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   93f539efa4b7e       kube-proxy-flq24
	629bd4d21adaa       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   d196c525bb040       kube-scheduler-multinode-922931
	4a17f1421a093       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   38fc47d5fa271       etcd-multinode-922931
	7867424ad4b04       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   cfbf9fbe56462       kube-apiserver-multinode-922931
	03ed977ad4a1d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   0b4951ffcb41e       kube-controller-manager-multinode-922931
	
	
	==> coredns [2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60174 - 38567 "HINFO IN 7562962971531487601.846825696782145744. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010918715s
	
	
	==> coredns [621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655] <==
	[INFO] 10.244.1.2:55711 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00170629s
	[INFO] 10.244.1.2:56002 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090237s
	[INFO] 10.244.1.2:56224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083937s
	[INFO] 10.244.1.2:47220 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00127978s
	[INFO] 10.244.1.2:55457 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059987s
	[INFO] 10.244.1.2:58382 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058695s
	[INFO] 10.244.1.2:33750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055373s
	[INFO] 10.244.0.3:38554 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085421s
	[INFO] 10.244.0.3:52406 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045679s
	[INFO] 10.244.0.3:56898 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032164s
	[INFO] 10.244.0.3:60906 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027701s
	[INFO] 10.244.1.2:38162 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127575s
	[INFO] 10.244.1.2:49352 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253324s
	[INFO] 10.244.1.2:52799 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082134s
	[INFO] 10.244.1.2:42108 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072074s
	[INFO] 10.244.0.3:35094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150276s
	[INFO] 10.244.0.3:45459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128965s
	[INFO] 10.244.0.3:39657 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097045s
	[INFO] 10.244.0.3:49961 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129456s
	[INFO] 10.244.1.2:37634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012354s
	[INFO] 10.244.1.2:33698 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096537s
	[INFO] 10.244.1.2:59430 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000099446s
	[INFO] 10.244.1.2:56304 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069733s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-922931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=multinode-922931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_54_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:54:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922931
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:03:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    multinode-922931
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 549624820dca455195f0e270dd2e4862
	  System UUID:                54962482-0dca-4551-95f0-e270dd2e4862
	  Boot ID:                    60f7d0bc-602e-4968-9053-47600fbbdc39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9dk5v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-6f6b679f8f-m5hh2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-922931                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m31s
	  kube-system                 kindnet-xt8rz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-multinode-922931             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-multinode-922931    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-flq24                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-multinode-922931             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m21s                  kube-proxy       
	  Normal  Starting                 100s                   kube-proxy       
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s (x8 over 8m35s)  kubelet          Node multinode-922931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x8 over 8m35s)  kubelet          Node multinode-922931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x7 over 8m35s)  kubelet          Node multinode-922931 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m29s                  kubelet          Node multinode-922931 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m29s                  kubelet          Node multinode-922931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m29s                  kubelet          Node multinode-922931 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m25s                  node-controller  Node multinode-922931 event: Registered Node multinode-922931 in Controller
	  Normal  NodeReady                8m8s                   kubelet          Node multinode-922931 status is now: NodeReady
	  Normal  Starting                 106s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)    kubelet          Node multinode-922931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)    kubelet          Node multinode-922931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)    kubelet          Node multinode-922931 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                    node-controller  Node multinode-922931 event: Registered Node multinode-922931 in Controller
	
	
	Name:               multinode-922931-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922931-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=multinode-922931
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_02_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:02:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922931-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:02:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:02:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:02:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:02:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    multinode-922931-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b7310b531df4a21aff9008f8b255f25
	  System UUID:                1b7310b5-31df-4a21-aff9-008f8b255f25
	  Boot ID:                    fcbcc64b-1873-4bf4-9ca3-c44e7ea8d40b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p68kf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-6qfwv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-proxy-qwdcr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m35s (x2 over 7m36s)  kubelet     Node multinode-922931-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x2 over 7m36s)  kubelet     Node multinode-922931-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x2 over 7m36s)  kubelet     Node multinode-922931-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m15s                  kubelet     Node multinode-922931-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-922931-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-922931-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-922931-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-922931-m02 status is now: NodeReady
	
	
	Name:               multinode-922931-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922931-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=multinode-922931
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_02_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:02:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922931-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:03:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:03:01 +0000   Thu, 29 Aug 2024 19:02:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:03:01 +0000   Thu, 29 Aug 2024 19:02:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:03:01 +0000   Thu, 29 Aug 2024 19:02:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:03:01 +0000   Thu, 29 Aug 2024 19:03:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    multinode-922931-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 02f263be194548de9e862f8b5a663367
	  System UUID:                02f263be-1945-48de-9e86-2f8b5a663367
	  Boot ID:                    a6a16883-f986-402f-beb6-c29be543dc52
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fjbnr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-z7svl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m36s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m47s                  kube-proxy  
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m40s (x2 over 6m41s)  kubelet     Node multinode-922931-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x2 over 6m41s)  kubelet     Node multinode-922931-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x2 over 6m41s)  kubelet     Node multinode-922931-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m21s                  kubelet     Node multinode-922931-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet     Node multinode-922931-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet     Node multinode-922931-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet     Node multinode-922931-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m32s                  kubelet     Node multinode-922931-m03 status is now: NodeReady
	  Normal  Starting                 22s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-922931-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-922931-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-922931-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-922931-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.054877] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.162387] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.131491] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.245873] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.795028] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +2.906864] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.062400] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.935676] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.089844] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.568964] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +1.634074] kauditd_printk_skb: 46 callbacks suppressed
	[ +15.121467] kauditd_printk_skb: 41 callbacks suppressed
	[Aug29 18:55] kauditd_printk_skb: 12 callbacks suppressed
	[Aug29 19:01] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.156212] systemd-fstab-generator[2652]: Ignoring "noauto" option for root device
	[  +0.168713] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.137089] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.268022] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +6.537749] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.091561] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.049287] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +4.650330] kauditd_printk_skb: 74 callbacks suppressed
	[ +16.272570] systemd-fstab-generator[3772]: Ignoring "noauto" option for root device
	[  +0.092373] kauditd_printk_skb: 36 callbacks suppressed
	[ +17.409124] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2] <==
	{"level":"info","ts":"2024-08-29T18:55:29.155348Z","caller":"traceutil/trace.go:171","msg":"trace[748270603] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"221.990226ms","start":"2024-08-29T18:55:28.933348Z","end":"2024-08-29T18:55:29.155339Z","steps":["trace[748270603] 'process raft request'  (duration: 215.112205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:55:29.155507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.914224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922931-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:55:29.155562Z","caller":"traceutil/trace.go:171","msg":"trace[246129906] range","detail":"{range_begin:/registry/minions/multinode-922931-m02; range_end:; response_count:0; response_revision:477; }","duration":"147.975071ms","start":"2024-08-29T18:55:29.007578Z","end":"2024-08-29T18:55:29.155553Z","steps":["trace[246129906] 'agreement among raft nodes before linearized reading'  (duration: 147.862084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:23.946964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.431227ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10610359361295044568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-922931-m03.17f047f70d85ee72\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-922931-m03.17f047f70d85ee72\" value_size:642 lease:1386987324440268341 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-29T18:56:23.947369Z","caller":"traceutil/trace.go:171","msg":"trace[1451279462] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"227.333678ms","start":"2024-08-29T18:56:23.720008Z","end":"2024-08-29T18:56:23.947341Z","steps":["trace[1451279462] 'process raft request'  (duration: 71.342509ms)","trace[1451279462] 'compare'  (duration: 155.289496ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:56:27.776049Z","caller":"traceutil/trace.go:171","msg":"trace[131885823] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"126.250212ms","start":"2024-08-29T18:56:27.649785Z","end":"2024-08-29T18:56:27.776036Z","steps":["trace[131885823] 'process raft request'  (duration: 126.157685ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:27.840296Z","caller":"traceutil/trace.go:171","msg":"trace[810940815] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"187.691507ms","start":"2024-08-29T18:56:27.652591Z","end":"2024-08-29T18:56:27.840283Z","steps":["trace[810940815] 'process raft request'  (duration: 186.946255ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:34.478437Z","caller":"traceutil/trace.go:171","msg":"trace[599162539] linearizableReadLoop","detail":"{readStateIndex:696; appliedIndex:695; }","duration":"324.630655ms","start":"2024-08-29T18:56:34.153795Z","end":"2024-08-29T18:56:34.478425Z","steps":["trace[599162539] 'read index received'  (duration: 324.36994ms)","trace[599162539] 'applied index is now lower than readState.Index'  (duration: 260.2µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:56:34.478679Z","caller":"traceutil/trace.go:171","msg":"trace[1236827111] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"353.056846ms","start":"2024-08-29T18:56:34.125614Z","end":"2024-08-29T18:56:34.478671Z","steps":["trace[1236827111] 'process raft request'  (duration: 352.61231ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:34.479116Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:56:34.125594Z","time spent":"353.118083ms","remote":"127.0.0.1:36040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3173,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-922931-m03\" mod_revision:637 > success:<request_put:<key:\"/registry/minions/multinode-922931-m03\" value_size:3127 >> failure:<request_range:<key:\"/registry/minions/multinode-922931-m03\" > >"}
	{"level":"warn","ts":"2024-08-29T18:56:34.479313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.394829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.171\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-08-29T18:56:34.481803Z","caller":"traceutil/trace.go:171","msg":"trace[1199810598] range","detail":"{range_begin:/registry/masterleases/192.168.39.171; range_end:; response_count:1; response_revision:658; }","duration":"257.885301ms","start":"2024-08-29T18:56:34.223904Z","end":"2024-08-29T18:56:34.481789Z","steps":["trace[1199810598] 'agreement among raft nodes before linearized reading'  (duration: 255.318841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:34.479369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.569017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-922931-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:56:34.482017Z","caller":"traceutil/trace.go:171","msg":"trace[391663869] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-922931-m03; range_end:; response_count:0; response_revision:658; }","duration":"328.218803ms","start":"2024-08-29T18:56:34.153790Z","end":"2024-08-29T18:56:34.482009Z","steps":["trace[391663869] 'agreement among raft nodes before linearized reading'  (duration: 325.555982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:34.482060Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:56:34.153758Z","time spent":"328.288766ms","remote":"127.0.0.1:36116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":0,"response size":28,"request content":"key:\"/registry/leases/kube-node-lease/multinode-922931-m03\" "}
	{"level":"info","ts":"2024-08-29T18:59:37.971045Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-29T18:59:37.971195Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-922931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"]}
	{"level":"warn","ts":"2024-08-29T18:59:37.971301Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:59:37.971395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:59:38.054619Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.171:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:59:38.054735Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.171:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T18:59:38.056307Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4e6b9cdcc1ed933f","current-leader-member-id":"4e6b9cdcc1ed933f"}
	{"level":"info","ts":"2024-08-29T18:59:38.059253Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T18:59:38.059385Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T18:59:38.059405Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-922931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"]}
	
	
	==> etcd [99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857] <==
	{"level":"info","ts":"2024-08-29T19:01:19.901310Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-08-29T19:01:19.901459Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:01:19.901483Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:01:19.905542Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:01:19.909568Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:01:19.909824Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T19:01:19.909852Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T19:01:19.913637Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:01:19.913695Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:01:21.380478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T19:01:21.380625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:01:21.380678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-08-29T19:01:21.380716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.380744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.380771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.380797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.387484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:01:21.387451Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:multinode-922931 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:01:21.387740Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:01:21.388731Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:01:21.389344Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:01:21.389518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-08-29T19:01:21.390714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:01:21.391139Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:01:21.391182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:03:04 up 9 min,  0 users,  load average: 0.43, 0.28, 0.15
	Linux multinode-922931 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba] <==
	I0829 18:58:56.348283       1 main.go:299] handling current node
	I0829 18:59:06.342773       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:06.342830       1 main.go:299] handling current node
	I0829 18:59:06.342849       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:06.342856       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:06.343003       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:06.343020       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 18:59:16.342929       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:16.342985       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:16.343164       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:16.343186       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 18:59:16.343242       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:16.343258       1 main.go:299] handling current node
	I0829 18:59:26.343035       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:26.343234       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 18:59:26.343409       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:26.343441       1 main.go:299] handling current node
	I0829 18:59:26.343471       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:26.343488       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:36.343505       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:36.343570       1 main.go:299] handling current node
	I0829 18:59:36.343594       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:36.343600       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:36.343765       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:36.343787       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116] <==
	I0829 19:02:24.445283       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 19:02:34.444950       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 19:02:34.445152       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 19:02:34.445386       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:02:34.445448       1 main.go:299] handling current node
	I0829 19:02:34.445492       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:02:34.445563       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:02:44.444922       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:02:44.444974       1 main.go:299] handling current node
	I0829 19:02:44.445017       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:02:44.445023       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:02:44.445241       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 19:02:44.445261       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.2.0/24] 
	I0829 19:02:54.444951       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:02:54.445168       1 main.go:299] handling current node
	I0829 19:02:54.445210       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:02:54.445231       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:02:54.445460       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 19:02:54.445487       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.2.0/24] 
	I0829 19:03:04.447186       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:03:04.447219       1 main.go:299] handling current node
	I0829 19:03:04.447235       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:03:04.447239       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:03:04.447344       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 19:03:04.447349       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547] <==
	I0829 18:54:35.920224       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 18:54:35.934780       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 18:54:40.353681       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 18:54:40.493775       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 18:55:57.486579       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49480: use of closed network connection
	E0829 18:55:57.650906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49512: use of closed network connection
	E0829 18:55:57.818967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49540: use of closed network connection
	E0829 18:55:57.983342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49554: use of closed network connection
	E0829 18:55:58.146697       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49578: use of closed network connection
	E0829 18:55:58.317382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49586: use of closed network connection
	E0829 18:55:58.582727       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49618: use of closed network connection
	E0829 18:55:58.742340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49624: use of closed network connection
	E0829 18:55:58.901851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49656: use of closed network connection
	E0829 18:55:59.080389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49670: use of closed network connection
	I0829 18:59:37.970515       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0829 18:59:37.997884       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.997943       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.997985       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998024       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998119       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998163       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998255       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:38.000190       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:38.000251       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:38.000299       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6] <==
	I0829 19:01:22.697789       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:01:22.707187       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:01:22.709132       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:01:22.709212       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:01:22.709238       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:01:22.718996       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:01:22.719098       1 policy_source.go:224] refreshing policies
	I0829 19:01:22.762450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:01:22.762561       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:01:22.764898       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:01:22.765017       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:01:22.765042       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:01:22.765189       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:01:22.766059       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:01:22.768922       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0829 19:01:22.773217       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0829 19:01:22.774972       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:01:23.568736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:01:24.399020       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:01:24.530637       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:01:24.544969       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:01:24.625303       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:01:24.631980       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:01:26.369649       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:01:26.418018       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e] <==
	I0829 18:57:11.275680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:57:11.275954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.274397       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-922931-m03\" does not exist"
	I0829 18:57:12.275407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:57:12.292933       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922931-m03" podCIDRs=["10.244.3.0/24"]
	I0829 18:57:12.293039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.293123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.301483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.310525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.652807       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:14.846176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:22.445460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:32.655439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:32.655666       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:57:32.662784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:34.803652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:58:14.821257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:58:14.825719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:58:14.828499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 18:58:14.855305       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 18:58:14.862438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:58:14.890512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.818726ms"
	I0829 18:58:14.890596       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.694µs"
	I0829 18:58:19.981289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 18:58:30.072646       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	
	
	==> kube-controller-manager [cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e] <==
	I0829 19:02:24.037193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 19:02:24.044795       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.859µs"
	I0829 19:02:24.057667       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="78.309µs"
	I0829 19:02:26.146991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 19:02:28.431825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.967452ms"
	I0829 19:02:28.431925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.554µs"
	I0829 19:02:35.366138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 19:02:41.552876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:41.569001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:41.808418       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 19:02:41.808583       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:42.901765       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-922931-m03\" does not exist"
	I0829 19:02:42.902914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 19:02:42.911313       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922931-m03" podCIDRs=["10.244.2.0/24"]
	I0829 19:02:42.911940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:42.912294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:42.922923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:43.348737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:43.661893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:46.244901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:53.310442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:01.125026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:01.125637       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 19:03:01.136604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:01.163776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	
	
	==> kube-proxy [b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:01:23.639245       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:01:23.664737       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0829 19:01:23.664840       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:01:23.725210       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:01:23.725314       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:01:23.725505       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:01:23.728620       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:01:23.729291       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:01:23.729377       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:01:23.730963       1 config.go:197] "Starting service config controller"
	I0829 19:01:23.732177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:01:23.732766       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:01:23.732857       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:01:23.733903       1 config.go:326] "Starting node config controller"
	I0829 19:01:23.735150       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:01:23.833930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:01:23.834049       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:01:23.835709       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:54:42.600139       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:54:42.608731       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0829 18:54:42.608803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:54:42.639303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:54:42.639383       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:54:42.639409       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:54:42.641513       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:54:42.641808       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:54:42.641830       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:54:42.645464       1 config.go:197] "Starting service config controller"
	I0829 18:54:42.645575       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:54:42.645835       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:54:42.645875       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:54:42.646659       1 config.go:326] "Starting node config controller"
	I0829 18:54:42.646702       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:54:42.745902       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:54:42.745918       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:54:42.747364       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad] <==
	I0829 19:01:20.716250       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:01:22.642998       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:01:22.643207       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:01:22.643293       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:01:22.643324       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:01:22.706035       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:01:22.706156       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:01:22.713748       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:01:22.713883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:01:22.714619       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:01:22.716144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:01:22.814774       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632] <==
	E0829 18:54:32.533899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.431482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.432163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.492861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.492914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.569663       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:54:33.570890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:54:33.620649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:54:33.620746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.623134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:54:33.623228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.781391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:54:33.781517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.786376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:54:33.786476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.801060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:54:33.801205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.801235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.803059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.842805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.842855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.852573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:54:33.852654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0829 18:54:36.517979       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 18:59:37.977939       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 29 19:01:28 multinode-922931 kubelet[2930]: E0829 19:01:28.862205    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958088861879009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:28 multinode-922931 kubelet[2930]: E0829 19:01:28.862268    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958088861879009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:38 multinode-922931 kubelet[2930]: E0829 19:01:38.865165    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958098864590988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:38 multinode-922931 kubelet[2930]: E0829 19:01:38.865746    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958098864590988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:48 multinode-922931 kubelet[2930]: E0829 19:01:48.870696    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958108868801059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:48 multinode-922931 kubelet[2930]: E0829 19:01:48.871285    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958108868801059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:58 multinode-922931 kubelet[2930]: E0829 19:01:58.873748    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958118873329421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:01:58 multinode-922931 kubelet[2930]: E0829 19:01:58.875014    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958118873329421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:08 multinode-922931 kubelet[2930]: E0829 19:02:08.877408    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958128876582712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:08 multinode-922931 kubelet[2930]: E0829 19:02:08.877996    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958128876582712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:18 multinode-922931 kubelet[2930]: E0829 19:02:18.880201    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958138879861090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:18 multinode-922931 kubelet[2930]: E0829 19:02:18.880224    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958138879861090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:18 multinode-922931 kubelet[2930]: E0829 19:02:18.882318    2930 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:02:18 multinode-922931 kubelet[2930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:02:18 multinode-922931 kubelet[2930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:02:18 multinode-922931 kubelet[2930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:02:18 multinode-922931 kubelet[2930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:02:28 multinode-922931 kubelet[2930]: E0829 19:02:28.881499    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958148881184287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:28 multinode-922931 kubelet[2930]: E0829 19:02:28.881566    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958148881184287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:38 multinode-922931 kubelet[2930]: E0829 19:02:38.885693    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958158883481565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:38 multinode-922931 kubelet[2930]: E0829 19:02:38.885955    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958158883481565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:48 multinode-922931 kubelet[2930]: E0829 19:02:48.898132    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958168896740281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:48 multinode-922931 kubelet[2930]: E0829 19:02:48.898528    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958168896740281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:58 multinode-922931 kubelet[2930]: E0829 19:02:58.900448    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958178900193472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:02:58 multinode-922931 kubelet[2930]: E0829 19:02:58.900472    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958178900193472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0829 19:03:03.571193   51233 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-922931 -n multinode-922931
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-922931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (330.05s)

x
+
TestMultiNode/serial/StopMultiNode (141.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 stop
E0829 19:03:26.706412   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:04:49.633770   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922931 stop: exit status 82 (2m0.460467038s)

-- stdout --
	* Stopping node "multinode-922931-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-922931 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922931 status: exit status 3 (18.828389912s)

-- stdout --
	multinode-922931
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-922931-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0829 19:05:26.750592   51882 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0829 19:05:26.750629   51882 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-922931 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-922931 -n multinode-922931
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-922931 logs -n 25: (1.341415638s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931:/home/docker/cp-test_multinode-922931-m02_multinode-922931.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931 sudo cat                                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m02_multinode-922931.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03:/home/docker/cp-test_multinode-922931-m02_multinode-922931-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931-m03 sudo cat                                   | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m02_multinode-922931-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp testdata/cp-test.txt                                                | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2825152660/001/cp-test_multinode-922931-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931:/home/docker/cp-test_multinode-922931-m03_multinode-922931.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931 sudo cat                                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m03_multinode-922931.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt                       | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m02:/home/docker/cp-test_multinode-922931-m03_multinode-922931-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n                                                                 | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | multinode-922931-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922931 ssh -n multinode-922931-m02 sudo cat                                   | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	|         | /home/docker/cp-test_multinode-922931-m03_multinode-922931-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-922931 node stop m03                                                          | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:56 UTC |
	| node    | multinode-922931 node start                                                             | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:56 UTC | 29 Aug 24 18:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-922931                                                                | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:57 UTC |                     |
	| stop    | -p multinode-922931                                                                     | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:57 UTC |                     |
	| start   | -p multinode-922931                                                                     | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 18:59 UTC | 29 Aug 24 19:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-922931                                                                | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 19:03 UTC |                     |
	| node    | multinode-922931 node delete                                                            | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 19:03 UTC | 29 Aug 24 19:03 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-922931 stop                                                                   | multinode-922931 | jenkins | v1.33.1 | 29 Aug 24 19:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:59:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:59:37.054756   50033 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:59:37.054892   50033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:59:37.054902   50033 out.go:358] Setting ErrFile to fd 2...
	I0829 18:59:37.054909   50033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:59:37.055116   50033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:59:37.055712   50033 out.go:352] Setting JSON to false
	I0829 18:59:37.056598   50033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6124,"bootTime":1724951853,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:59:37.056662   50033 start.go:139] virtualization: kvm guest
	I0829 18:59:37.058937   50033 out.go:177] * [multinode-922931] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:59:37.060091   50033 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:59:37.060092   50033 notify.go:220] Checking for updates...
	I0829 18:59:37.062276   50033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:59:37.063551   50033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:59:37.064784   50033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:59:37.066154   50033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:59:37.067604   50033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:59:37.069353   50033 config.go:182] Loaded profile config "multinode-922931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:59:37.069474   50033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:59:37.069930   50033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:59:37.069978   50033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:59:37.087206   50033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0829 18:59:37.087781   50033 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:59:37.088528   50033 main.go:141] libmachine: Using API Version  1
	I0829 18:59:37.088555   50033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:59:37.088945   50033 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:59:37.089119   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:59:37.125606   50033 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 18:59:37.126913   50033 start.go:297] selected driver: kvm2
	I0829 18:59:37.126926   50033 start.go:901] validating driver "kvm2" against &{Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:59:37.127111   50033 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:59:37.127470   50033 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:59:37.127550   50033 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:59:37.142847   50033 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:59:37.143817   50033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:59:37.143900   50033 cni.go:84] Creating CNI manager for ""
	I0829 18:59:37.143916   50033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 18:59:37.143996   50033 start.go:340] cluster config:
	{Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:59:37.144176   50033 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:59:37.146960   50033 out.go:177] * Starting "multinode-922931" primary control-plane node in "multinode-922931" cluster
	I0829 18:59:37.148341   50033 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:59:37.148379   50033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:59:37.148391   50033 cache.go:56] Caching tarball of preloaded images
	I0829 18:59:37.148456   50033 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 18:59:37.148470   50033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 18:59:37.148610   50033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/config.json ...
	I0829 18:59:37.148836   50033 start.go:360] acquireMachinesLock for multinode-922931: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 18:59:37.148880   50033 start.go:364] duration metric: took 25.938µs to acquireMachinesLock for "multinode-922931"
	I0829 18:59:37.148905   50033 start.go:96] Skipping create...Using existing machine configuration
	I0829 18:59:37.148916   50033 fix.go:54] fixHost starting: 
	I0829 18:59:37.149190   50033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:59:37.149224   50033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:59:37.163346   50033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I0829 18:59:37.163775   50033 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:59:37.164214   50033 main.go:141] libmachine: Using API Version  1
	I0829 18:59:37.164231   50033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:59:37.164621   50033 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:59:37.164818   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:59:37.164978   50033 main.go:141] libmachine: (multinode-922931) Calling .GetState
	I0829 18:59:37.166673   50033 fix.go:112] recreateIfNeeded on multinode-922931: state=Running err=<nil>
	W0829 18:59:37.166695   50033 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 18:59:37.169527   50033 out.go:177] * Updating the running kvm2 "multinode-922931" VM ...
	I0829 18:59:37.170874   50033 machine.go:93] provisionDockerMachine start ...
	I0829 18:59:37.170890   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:59:37.171074   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.173497   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.173949   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.173979   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.174077   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.174247   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.174417   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.174559   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.174741   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.175024   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.175041   50033 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:59:37.286982   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922931
	
	I0829 18:59:37.287015   50033 main.go:141] libmachine: (multinode-922931) Calling .GetMachineName
	I0829 18:59:37.287234   50033 buildroot.go:166] provisioning hostname "multinode-922931"
	I0829 18:59:37.287256   50033 main.go:141] libmachine: (multinode-922931) Calling .GetMachineName
	I0829 18:59:37.287454   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.290166   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.290526   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.290563   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.290658   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.290840   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.290979   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.291087   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.291247   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.291414   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.291430   50033 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-922931 && echo "multinode-922931" | sudo tee /etc/hostname
	I0829 18:59:37.413402   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922931
	
	I0829 18:59:37.413432   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.416286   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.416722   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.416749   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.416941   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.417144   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.417292   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.417396   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.417541   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.417728   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.417746   50033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-922931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-922931/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-922931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:59:37.526697   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:59:37.526729   50033 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 18:59:37.526768   50033 buildroot.go:174] setting up certificates
	I0829 18:59:37.526784   50033 provision.go:84] configureAuth start
	I0829 18:59:37.526804   50033 main.go:141] libmachine: (multinode-922931) Calling .GetMachineName
	I0829 18:59:37.527079   50033 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 18:59:37.529995   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.530386   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.530412   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.530562   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.532953   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.533328   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.533382   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.533438   50033 provision.go:143] copyHostCerts
	I0829 18:59:37.533481   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:59:37.533514   50033 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 18:59:37.533531   50033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 18:59:37.533623   50033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 18:59:37.533719   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:59:37.533739   50033 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 18:59:37.533743   50033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 18:59:37.533768   50033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 18:59:37.533815   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:59:37.533837   50033 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 18:59:37.533840   50033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 18:59:37.533860   50033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 18:59:37.533908   50033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.multinode-922931 san=[127.0.0.1 192.168.39.171 localhost minikube multinode-922931]
	I0829 18:59:37.682359   50033 provision.go:177] copyRemoteCerts
	I0829 18:59:37.682418   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:59:37.682443   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.685371   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.685742   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.685763   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.685992   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.686152   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.686316   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.686465   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 18:59:37.773066   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0829 18:59:37.773156   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 18:59:37.799212   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0829 18:59:37.799304   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0829 18:59:37.826153   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0829 18:59:37.826232   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 18:59:37.848886   50033 provision.go:87] duration metric: took 322.08952ms to configureAuth
	I0829 18:59:37.848912   50033 buildroot.go:189] setting minikube options for container-runtime
	I0829 18:59:37.849146   50033 config.go:182] Loaded profile config "multinode-922931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:59:37.849228   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:59:37.852277   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.852669   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:59:37.852711   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:59:37.852890   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:59:37.853091   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.853244   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:59:37.853403   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:59:37.853556   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 18:59:37.853761   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 18:59:37.853781   50033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:01:08.561984   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:01:08.562012   50033 machine.go:96] duration metric: took 1m31.391127481s to provisionDockerMachine
	I0829 19:01:08.562051   50033 start.go:293] postStartSetup for "multinode-922931" (driver="kvm2")
	I0829 19:01:08.562065   50033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:01:08.562085   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.562641   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:01:08.562676   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.565987   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.566416   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.566439   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.566622   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.566820   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.566983   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.567117   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 19:01:08.653170   50033 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:01:08.657184   50033 command_runner.go:130] > NAME=Buildroot
	I0829 19:01:08.657205   50033 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0829 19:01:08.657210   50033 command_runner.go:130] > ID=buildroot
	I0829 19:01:08.657215   50033 command_runner.go:130] > VERSION_ID=2023.02.9
	I0829 19:01:08.657220   50033 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0829 19:01:08.657250   50033 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:01:08.657261   50033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:01:08.657323   50033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:01:08.657428   50033 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:01:08.657440   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /etc/ssl/certs/202592.pem
	I0829 19:01:08.657553   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:01:08.666679   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:01:08.690150   50033 start.go:296] duration metric: took 128.083581ms for postStartSetup
	I0829 19:01:08.690207   50033 fix.go:56] duration metric: took 1m31.541290233s for fixHost
	I0829 19:01:08.690231   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.693191   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.693553   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.693611   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.693732   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.693911   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.694037   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.694271   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.694453   50033 main.go:141] libmachine: Using SSH client type: native
	I0829 19:01:08.694624   50033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0829 19:01:08.694637   50033 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:01:08.802640   50033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724958068.775996167
	
	I0829 19:01:08.802674   50033 fix.go:216] guest clock: 1724958068.775996167
	I0829 19:01:08.802687   50033 fix.go:229] Guest: 2024-08-29 19:01:08.775996167 +0000 UTC Remote: 2024-08-29 19:01:08.690213116 +0000 UTC m=+91.672633372 (delta=85.783051ms)
	I0829 19:01:08.802725   50033 fix.go:200] guest clock delta is within tolerance: 85.783051ms
	I0829 19:01:08.802735   50033 start.go:83] releasing machines lock for "multinode-922931", held for 1m31.65384268s
	I0829 19:01:08.802773   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.803067   50033 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 19:01:08.806035   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.806445   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.806465   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.806607   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.807113   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.807314   50033 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 19:01:08.807398   50033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:01:08.807441   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.807553   50033 ssh_runner.go:195] Run: cat /version.json
	I0829 19:01:08.807582   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 19:01:08.810026   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810359   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810438   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.810473   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810559   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.810716   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:08.810736   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.810753   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:08.810862   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.810920   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 19:01:08.810990   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 19:01:08.811100   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 19:01:08.811243   50033 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 19:01:08.811449   50033 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 19:01:08.919687   50033 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0829 19:01:08.919748   50033 command_runner.go:130] > {"iso_version": "v1.33.1-1724775098-19521", "kicbase_version": "v0.0.44-1724667927-19511", "minikube_version": "v1.33.1", "commit": "0d49494423856821e9b08161b42ba19c667a6f89"}
	I0829 19:01:08.919873   50033 ssh_runner.go:195] Run: systemctl --version
	I0829 19:01:08.926000   50033 command_runner.go:130] > systemd 252 (252)
	I0829 19:01:08.926030   50033 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0829 19:01:08.926375   50033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:01:09.082295   50033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 19:01:09.089670   50033 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0829 19:01:09.089733   50033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:01:09.089816   50033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:01:09.098593   50033 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:01:09.098615   50033 start.go:495] detecting cgroup driver to use...
	I0829 19:01:09.098688   50033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:01:09.113832   50033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:01:09.126834   50033 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:01:09.126905   50033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:01:09.139817   50033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:01:09.152680   50033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:01:09.291184   50033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:01:09.450754   50033 docker.go:233] disabling docker service ...
	I0829 19:01:09.450824   50033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:01:09.466198   50033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:01:09.480007   50033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:01:09.613161   50033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:01:09.750633   50033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:01:09.764220   50033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:01:09.781185   50033 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0829 19:01:09.781477   50033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:01:09.781533   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.792622   50033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:01:09.792699   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.805343   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.817143   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.827985   50033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:01:09.837753   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.847432   50033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.857381   50033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:01:09.867062   50033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:01:09.875783   50033 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0829 19:01:09.875883   50033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:01:09.884458   50033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:01:10.018103   50033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:01:16.116115   50033 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.097970116s)
	I0829 19:01:16.116142   50033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:01:16.116187   50033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:01:16.120649   50033 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0829 19:01:16.120678   50033 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0829 19:01:16.120688   50033 command_runner.go:130] > Device: 0,22	Inode: 1331        Links: 1
	I0829 19:01:16.120697   50033 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:01:16.120704   50033 command_runner.go:130] > Access: 2024-08-29 19:01:15.991811964 +0000
	I0829 19:01:16.120714   50033 command_runner.go:130] > Modify: 2024-08-29 19:01:15.991811964 +0000
	I0829 19:01:16.120725   50033 command_runner.go:130] > Change: 2024-08-29 19:01:15.991811964 +0000
	I0829 19:01:16.120732   50033 command_runner.go:130] >  Birth: -
	I0829 19:01:16.120762   50033 start.go:563] Will wait 60s for crictl version
	I0829 19:01:16.120810   50033 ssh_runner.go:195] Run: which crictl
	I0829 19:01:16.124436   50033 command_runner.go:130] > /usr/bin/crictl
	I0829 19:01:16.124487   50033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:01:16.159113   50033 command_runner.go:130] > Version:  0.1.0
	I0829 19:01:16.159137   50033 command_runner.go:130] > RuntimeName:  cri-o
	I0829 19:01:16.159155   50033 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0829 19:01:16.159163   50033 command_runner.go:130] > RuntimeApiVersion:  v1
	I0829 19:01:16.159245   50033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:01:16.159302   50033 ssh_runner.go:195] Run: crio --version
	I0829 19:01:16.187486   50033 command_runner.go:130] > crio version 1.29.1
	I0829 19:01:16.187508   50033 command_runner.go:130] > Version:        1.29.1
	I0829 19:01:16.187516   50033 command_runner.go:130] > GitCommit:      unknown
	I0829 19:01:16.187521   50033 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:01:16.187526   50033 command_runner.go:130] > GitTreeState:   clean
	I0829 19:01:16.187533   50033 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0829 19:01:16.187540   50033 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:01:16.187546   50033 command_runner.go:130] > Compiler:       gc
	I0829 19:01:16.187552   50033 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:01:16.187558   50033 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:01:16.187564   50033 command_runner.go:130] > BuildTags:      
	I0829 19:01:16.187572   50033 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:01:16.187580   50033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:01:16.187588   50033 command_runner.go:130] >   btrfs_noversion
	I0829 19:01:16.187599   50033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:01:16.187607   50033 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:01:16.187613   50033 command_runner.go:130] >   seccomp
	I0829 19:01:16.187621   50033 command_runner.go:130] > LDFlags:          unknown
	I0829 19:01:16.187627   50033 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:01:16.187634   50033 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:01:16.188693   50033 ssh_runner.go:195] Run: crio --version
	I0829 19:01:16.216213   50033 command_runner.go:130] > crio version 1.29.1
	I0829 19:01:16.216233   50033 command_runner.go:130] > Version:        1.29.1
	I0829 19:01:16.216238   50033 command_runner.go:130] > GitCommit:      unknown
	I0829 19:01:16.216242   50033 command_runner.go:130] > GitCommitDate:  unknown
	I0829 19:01:16.216246   50033 command_runner.go:130] > GitTreeState:   clean
	I0829 19:01:16.216251   50033 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0829 19:01:16.216255   50033 command_runner.go:130] > GoVersion:      go1.21.6
	I0829 19:01:16.216259   50033 command_runner.go:130] > Compiler:       gc
	I0829 19:01:16.216263   50033 command_runner.go:130] > Platform:       linux/amd64
	I0829 19:01:16.216267   50033 command_runner.go:130] > Linkmode:       dynamic
	I0829 19:01:16.216271   50033 command_runner.go:130] > BuildTags:      
	I0829 19:01:16.216276   50033 command_runner.go:130] >   containers_image_ostree_stub
	I0829 19:01:16.216280   50033 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0829 19:01:16.216293   50033 command_runner.go:130] >   btrfs_noversion
	I0829 19:01:16.216300   50033 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0829 19:01:16.216303   50033 command_runner.go:130] >   libdm_no_deferred_remove
	I0829 19:01:16.216307   50033 command_runner.go:130] >   seccomp
	I0829 19:01:16.216313   50033 command_runner.go:130] > LDFlags:          unknown
	I0829 19:01:16.216318   50033 command_runner.go:130] > SeccompEnabled:   true
	I0829 19:01:16.216325   50033 command_runner.go:130] > AppArmorEnabled:  false
	I0829 19:01:16.219123   50033 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:01:16.220547   50033 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 19:01:16.223443   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:16.223792   50033 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 19:01:16.223821   50033 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 19:01:16.223987   50033 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:01:16.227881   50033 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0829 19:01:16.227994   50033 kubeadm.go:883] updating cluster {Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:01:16.228123   50033 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:01:16.228179   50033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:01:16.272868   50033 command_runner.go:130] > {
	I0829 19:01:16.272894   50033 command_runner.go:130] >   "images": [
	I0829 19:01:16.272916   50033 command_runner.go:130] >     {
	I0829 19:01:16.272926   50033 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:01:16.272933   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.272942   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:01:16.272948   50033 command_runner.go:130] >       ],
	I0829 19:01:16.272955   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.272967   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:01:16.272982   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:01:16.272990   50033 command_runner.go:130] >       ],
	I0829 19:01:16.272996   50033 command_runner.go:130] >       "size": "87165492",
	I0829 19:01:16.273002   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273006   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273017   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273021   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273027   50033 command_runner.go:130] >     },
	I0829 19:01:16.273030   50033 command_runner.go:130] >     {
	I0829 19:01:16.273036   50033 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:01:16.273041   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273046   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:01:16.273050   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273053   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273060   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:01:16.273069   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:01:16.273073   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273079   50033 command_runner.go:130] >       "size": "87190579",
	I0829 19:01:16.273083   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273093   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273100   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273109   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273116   50033 command_runner.go:130] >     },
	I0829 19:01:16.273124   50033 command_runner.go:130] >     {
	I0829 19:01:16.273136   50033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:01:16.273145   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273154   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:01:16.273162   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273171   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273193   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:01:16.273207   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:01:16.273215   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273222   50033 command_runner.go:130] >       "size": "1363676",
	I0829 19:01:16.273230   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273239   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273249   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273259   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273266   50033 command_runner.go:130] >     },
	I0829 19:01:16.273275   50033 command_runner.go:130] >     {
	I0829 19:01:16.273287   50033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:01:16.273297   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273308   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:01:16.273324   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273333   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273348   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:01:16.273377   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:01:16.273386   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273393   50033 command_runner.go:130] >       "size": "31470524",
	I0829 19:01:16.273405   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273415   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.273423   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273431   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273440   50033 command_runner.go:130] >     },
	I0829 19:01:16.273449   50033 command_runner.go:130] >     {
	I0829 19:01:16.273461   50033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:01:16.273470   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273488   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:01:16.273496   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273505   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273518   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:01:16.273532   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:01:16.273540   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273550   50033 command_runner.go:130] >       "size": "61245718",
	I0829 19:01:16.273558   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.273565   50033 command_runner.go:130] >       "username": "nonroot",
	I0829 19:01:16.273667   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.273791   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.273803   50033 command_runner.go:130] >     },
	I0829 19:01:16.273809   50033 command_runner.go:130] >     {
	I0829 19:01:16.273820   50033 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:01:16.273831   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.273845   50033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:01:16.273867   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273874   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.273889   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:01:16.273905   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:01:16.273910   50033 command_runner.go:130] >       ],
	I0829 19:01:16.273917   50033 command_runner.go:130] >       "size": "149009664",
	I0829 19:01:16.273923   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.273929   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.273939   50033 command_runner.go:130] >       },
	I0829 19:01:16.273967   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274027   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274034   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274038   50033 command_runner.go:130] >     },
	I0829 19:01:16.274042   50033 command_runner.go:130] >     {
	I0829 19:01:16.274051   50033 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:01:16.274062   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274072   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:01:16.274077   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274084   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274114   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:01:16.274130   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:01:16.274134   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274144   50033 command_runner.go:130] >       "size": "95233506",
	I0829 19:01:16.274148   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274153   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.274156   50033 command_runner.go:130] >       },
	I0829 19:01:16.274159   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274163   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274170   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274175   50033 command_runner.go:130] >     },
	I0829 19:01:16.274180   50033 command_runner.go:130] >     {
	I0829 19:01:16.274191   50033 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:01:16.274198   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274211   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:01:16.274217   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274224   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274249   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:01:16.274264   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:01:16.274270   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274277   50033 command_runner.go:130] >       "size": "89437512",
	I0829 19:01:16.274283   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274296   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.274301   50033 command_runner.go:130] >       },
	I0829 19:01:16.274310   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274316   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274322   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274334   50033 command_runner.go:130] >     },
	I0829 19:01:16.274338   50033 command_runner.go:130] >     {
	I0829 19:01:16.274349   50033 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:01:16.274356   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274364   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:01:16.274370   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274381   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274396   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:01:16.274413   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:01:16.274418   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274423   50033 command_runner.go:130] >       "size": "92728217",
	I0829 19:01:16.274426   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.274431   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274438   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274450   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274456   50033 command_runner.go:130] >     },
	I0829 19:01:16.274461   50033 command_runner.go:130] >     {
	I0829 19:01:16.274472   50033 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:01:16.274478   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274491   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:01:16.274497   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274504   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274511   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:01:16.274526   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:01:16.274532   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274539   50033 command_runner.go:130] >       "size": "68420936",
	I0829 19:01:16.274545   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274557   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.274563   50033 command_runner.go:130] >       },
	I0829 19:01:16.274569   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274575   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274581   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.274587   50033 command_runner.go:130] >     },
	I0829 19:01:16.274591   50033 command_runner.go:130] >     {
	I0829 19:01:16.274599   50033 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:01:16.274605   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.274618   50033 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:01:16.274625   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274642   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.274653   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:01:16.274670   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:01:16.274675   50033 command_runner.go:130] >       ],
	I0829 19:01:16.274680   50033 command_runner.go:130] >       "size": "742080",
	I0829 19:01:16.274685   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.274692   50033 command_runner.go:130] >         "value": "65535"
	I0829 19:01:16.274697   50033 command_runner.go:130] >       },
	I0829 19:01:16.274704   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.274715   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.274721   50033 command_runner.go:130] >       "pinned": true
	I0829 19:01:16.274727   50033 command_runner.go:130] >     }
	I0829 19:01:16.274732   50033 command_runner.go:130] >   ]
	I0829 19:01:16.274736   50033 command_runner.go:130] > }
	I0829 19:01:16.275025   50033 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:01:16.275036   50033 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:01:16.275136   50033 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:01:16.307070   50033 command_runner.go:130] > {
	I0829 19:01:16.307095   50033 command_runner.go:130] >   "images": [
	I0829 19:01:16.307103   50033 command_runner.go:130] >     {
	I0829 19:01:16.307113   50033 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0829 19:01:16.307120   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307128   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0829 19:01:16.307133   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307138   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307152   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0829 19:01:16.307169   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0829 19:01:16.307177   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307185   50033 command_runner.go:130] >       "size": "87165492",
	I0829 19:01:16.307192   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307202   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307211   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307218   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307224   50033 command_runner.go:130] >     },
	I0829 19:01:16.307232   50033 command_runner.go:130] >     {
	I0829 19:01:16.307242   50033 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0829 19:01:16.307249   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307258   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0829 19:01:16.307264   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307271   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307283   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0829 19:01:16.307295   50033 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0829 19:01:16.307302   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307309   50033 command_runner.go:130] >       "size": "87190579",
	I0829 19:01:16.307316   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307326   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307335   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307342   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307348   50033 command_runner.go:130] >     },
	I0829 19:01:16.307365   50033 command_runner.go:130] >     {
	I0829 19:01:16.307376   50033 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0829 19:01:16.307385   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307395   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0829 19:01:16.307402   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307410   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307421   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0829 19:01:16.307434   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0829 19:01:16.307441   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307451   50033 command_runner.go:130] >       "size": "1363676",
	I0829 19:01:16.307460   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307467   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307484   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307493   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307499   50033 command_runner.go:130] >     },
	I0829 19:01:16.307505   50033 command_runner.go:130] >     {
	I0829 19:01:16.307516   50033 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0829 19:01:16.307524   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307534   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0829 19:01:16.307542   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307550   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307566   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0829 19:01:16.307586   50033 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0829 19:01:16.307595   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307603   50033 command_runner.go:130] >       "size": "31470524",
	I0829 19:01:16.307611   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307617   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307624   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307633   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307640   50033 command_runner.go:130] >     },
	I0829 19:01:16.307648   50033 command_runner.go:130] >     {
	I0829 19:01:16.307659   50033 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0829 19:01:16.307668   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307677   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0829 19:01:16.307685   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307692   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307707   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0829 19:01:16.307722   50033 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0829 19:01:16.307731   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307739   50033 command_runner.go:130] >       "size": "61245718",
	I0829 19:01:16.307750   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.307760   50033 command_runner.go:130] >       "username": "nonroot",
	I0829 19:01:16.307769   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307779   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307786   50033 command_runner.go:130] >     },
	I0829 19:01:16.307792   50033 command_runner.go:130] >     {
	I0829 19:01:16.307803   50033 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0829 19:01:16.307812   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307820   50033 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0829 19:01:16.307829   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307836   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.307850   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0829 19:01:16.307864   50033 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0829 19:01:16.307873   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307880   50033 command_runner.go:130] >       "size": "149009664",
	I0829 19:01:16.307887   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.307897   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.307909   50033 command_runner.go:130] >       },
	I0829 19:01:16.307918   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.307925   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.307935   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.307943   50033 command_runner.go:130] >     },
	I0829 19:01:16.307950   50033 command_runner.go:130] >     {
	I0829 19:01:16.307960   50033 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0829 19:01:16.307969   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.307981   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0829 19:01:16.307990   50033 command_runner.go:130] >       ],
	I0829 19:01:16.307997   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308013   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0829 19:01:16.308028   50033 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0829 19:01:16.308037   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308045   50033 command_runner.go:130] >       "size": "95233506",
	I0829 19:01:16.308054   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308064   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.308072   50033 command_runner.go:130] >       },
	I0829 19:01:16.308080   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308090   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308098   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308106   50033 command_runner.go:130] >     },
	I0829 19:01:16.308113   50033 command_runner.go:130] >     {
	I0829 19:01:16.308125   50033 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0829 19:01:16.308133   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308144   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0829 19:01:16.308152   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308160   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308187   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0829 19:01:16.308202   50033 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0829 19:01:16.308208   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308216   50033 command_runner.go:130] >       "size": "89437512",
	I0829 19:01:16.308225   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308232   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.308240   50033 command_runner.go:130] >       },
	I0829 19:01:16.308248   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308257   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308265   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308273   50033 command_runner.go:130] >     },
	I0829 19:01:16.308279   50033 command_runner.go:130] >     {
	I0829 19:01:16.308290   50033 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0829 19:01:16.308299   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308310   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0829 19:01:16.308318   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308324   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308337   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0829 19:01:16.308360   50033 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0829 19:01:16.308369   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308376   50033 command_runner.go:130] >       "size": "92728217",
	I0829 19:01:16.308384   50033 command_runner.go:130] >       "uid": null,
	I0829 19:01:16.308395   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308404   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308412   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308420   50033 command_runner.go:130] >     },
	I0829 19:01:16.308427   50033 command_runner.go:130] >     {
	I0829 19:01:16.308439   50033 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0829 19:01:16.308448   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308458   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0829 19:01:16.308465   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308475   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308489   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0829 19:01:16.308504   50033 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0829 19:01:16.308512   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308519   50033 command_runner.go:130] >       "size": "68420936",
	I0829 19:01:16.308528   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308535   50033 command_runner.go:130] >         "value": "0"
	I0829 19:01:16.308543   50033 command_runner.go:130] >       },
	I0829 19:01:16.308550   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308559   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308567   50033 command_runner.go:130] >       "pinned": false
	I0829 19:01:16.308575   50033 command_runner.go:130] >     },
	I0829 19:01:16.308581   50033 command_runner.go:130] >     {
	I0829 19:01:16.308592   50033 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0829 19:01:16.308601   50033 command_runner.go:130] >       "repoTags": [
	I0829 19:01:16.308609   50033 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0829 19:01:16.308619   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308626   50033 command_runner.go:130] >       "repoDigests": [
	I0829 19:01:16.308640   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0829 19:01:16.308655   50033 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0829 19:01:16.308663   50033 command_runner.go:130] >       ],
	I0829 19:01:16.308671   50033 command_runner.go:130] >       "size": "742080",
	I0829 19:01:16.308679   50033 command_runner.go:130] >       "uid": {
	I0829 19:01:16.308687   50033 command_runner.go:130] >         "value": "65535"
	I0829 19:01:16.308695   50033 command_runner.go:130] >       },
	I0829 19:01:16.308703   50033 command_runner.go:130] >       "username": "",
	I0829 19:01:16.308712   50033 command_runner.go:130] >       "spec": null,
	I0829 19:01:16.308722   50033 command_runner.go:130] >       "pinned": true
	I0829 19:01:16.308728   50033 command_runner.go:130] >     }
	I0829 19:01:16.308736   50033 command_runner.go:130] >   ]
	I0829 19:01:16.308742   50033 command_runner.go:130] > }
	I0829 19:01:16.308868   50033 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:01:16.308880   50033 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:01:16.308889   50033 kubeadm.go:934] updating node { 192.168.39.171 8443 v1.31.0 crio true true} ...
	I0829 19:01:16.309014   50033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-922931 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:01:16.309135   50033 ssh_runner.go:195] Run: crio config
	I0829 19:01:16.347133   50033 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0829 19:01:16.347160   50033 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0829 19:01:16.347170   50033 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0829 19:01:16.347174   50033 command_runner.go:130] > #
	I0829 19:01:16.347184   50033 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0829 19:01:16.347192   50033 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0829 19:01:16.347201   50033 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0829 19:01:16.347211   50033 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0829 19:01:16.347218   50033 command_runner.go:130] > # reload'.
	I0829 19:01:16.347228   50033 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0829 19:01:16.347240   50033 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0829 19:01:16.347249   50033 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0829 19:01:16.347259   50033 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0829 19:01:16.347268   50033 command_runner.go:130] > [crio]
	I0829 19:01:16.347278   50033 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0829 19:01:16.347290   50033 command_runner.go:130] > # containers images, in this directory.
	I0829 19:01:16.347374   50033 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0829 19:01:16.347402   50033 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0829 19:01:16.347415   50033 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0829 19:01:16.347430   50033 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0829 19:01:16.347439   50033 command_runner.go:130] > # imagestore = ""
	I0829 19:01:16.347448   50033 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0829 19:01:16.347460   50033 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0829 19:01:16.347470   50033 command_runner.go:130] > storage_driver = "overlay"
	I0829 19:01:16.347482   50033 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0829 19:01:16.347494   50033 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0829 19:01:16.347504   50033 command_runner.go:130] > storage_option = [
	I0829 19:01:16.347514   50033 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0829 19:01:16.347522   50033 command_runner.go:130] > ]
	I0829 19:01:16.347533   50033 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0829 19:01:16.347557   50033 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0829 19:01:16.347569   50033 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0829 19:01:16.347580   50033 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0829 19:01:16.347593   50033 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0829 19:01:16.347604   50033 command_runner.go:130] > # always happen on a node reboot
	I0829 19:01:16.347615   50033 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0829 19:01:16.347630   50033 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0829 19:01:16.347642   50033 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0829 19:01:16.347655   50033 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0829 19:01:16.347668   50033 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0829 19:01:16.347682   50033 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0829 19:01:16.347699   50033 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0829 19:01:16.347711   50033 command_runner.go:130] > # internal_wipe = true
	I0829 19:01:16.347724   50033 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0829 19:01:16.347737   50033 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0829 19:01:16.347745   50033 command_runner.go:130] > # internal_repair = false
	I0829 19:01:16.347774   50033 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0829 19:01:16.347790   50033 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0829 19:01:16.347803   50033 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0829 19:01:16.347811   50033 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0829 19:01:16.347824   50033 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0829 19:01:16.347830   50033 command_runner.go:130] > [crio.api]
	I0829 19:01:16.347839   50033 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0829 19:01:16.347849   50033 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0829 19:01:16.347861   50033 command_runner.go:130] > # IP address on which the stream server will listen.
	I0829 19:01:16.347871   50033 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0829 19:01:16.347884   50033 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0829 19:01:16.347896   50033 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0829 19:01:16.347904   50033 command_runner.go:130] > # stream_port = "0"
	I0829 19:01:16.347913   50033 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0829 19:01:16.347922   50033 command_runner.go:130] > # stream_enable_tls = false
	I0829 19:01:16.347931   50033 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0829 19:01:16.347942   50033 command_runner.go:130] > # stream_idle_timeout = ""
	I0829 19:01:16.347952   50033 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0829 19:01:16.347963   50033 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0829 19:01:16.347971   50033 command_runner.go:130] > # minutes.
	I0829 19:01:16.347979   50033 command_runner.go:130] > # stream_tls_cert = ""
	I0829 19:01:16.347998   50033 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0829 19:01:16.348010   50033 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0829 19:01:16.348040   50033 command_runner.go:130] > # stream_tls_key = ""
	I0829 19:01:16.348059   50033 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0829 19:01:16.348073   50033 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0829 19:01:16.348103   50033 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0829 19:01:16.348114   50033 command_runner.go:130] > # stream_tls_ca = ""
	I0829 19:01:16.348129   50033 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:01:16.348140   50033 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0829 19:01:16.348151   50033 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0829 19:01:16.348165   50033 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0829 19:01:16.348175   50033 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0829 19:01:16.348187   50033 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0829 19:01:16.348195   50033 command_runner.go:130] > [crio.runtime]
	I0829 19:01:16.348204   50033 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0829 19:01:16.348213   50033 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0829 19:01:16.348222   50033 command_runner.go:130] > # "nofile=1024:2048"
	I0829 19:01:16.348232   50033 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0829 19:01:16.348241   50033 command_runner.go:130] > # default_ulimits = [
	I0829 19:01:16.348247   50033 command_runner.go:130] > # ]
	I0829 19:01:16.348256   50033 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0829 19:01:16.348266   50033 command_runner.go:130] > # no_pivot = false
	I0829 19:01:16.348275   50033 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0829 19:01:16.348287   50033 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0829 19:01:16.348298   50033 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0829 19:01:16.348308   50033 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0829 19:01:16.348318   50033 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0829 19:01:16.348328   50033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:01:16.348335   50033 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0829 19:01:16.348344   50033 command_runner.go:130] > # Cgroup setting for conmon
	I0829 19:01:16.348374   50033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0829 19:01:16.348386   50033 command_runner.go:130] > conmon_cgroup = "pod"
	I0829 19:01:16.348399   50033 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0829 19:01:16.348410   50033 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0829 19:01:16.348422   50033 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0829 19:01:16.348430   50033 command_runner.go:130] > conmon_env = [
	I0829 19:01:16.348435   50033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:01:16.348445   50033 command_runner.go:130] > ]
	I0829 19:01:16.348466   50033 command_runner.go:130] > # Additional environment variables to set for all the
	I0829 19:01:16.348479   50033 command_runner.go:130] > # containers. These are overridden if set in the
	I0829 19:01:16.348490   50033 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0829 19:01:16.348499   50033 command_runner.go:130] > # default_env = [
	I0829 19:01:16.348507   50033 command_runner.go:130] > # ]
	I0829 19:01:16.348516   50033 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0829 19:01:16.348528   50033 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0829 19:01:16.348535   50033 command_runner.go:130] > # selinux = false
	I0829 19:01:16.348544   50033 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0829 19:01:16.348557   50033 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0829 19:01:16.348570   50033 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0829 19:01:16.348578   50033 command_runner.go:130] > # seccomp_profile = ""
	I0829 19:01:16.348590   50033 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0829 19:01:16.348601   50033 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0829 19:01:16.348613   50033 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0829 19:01:16.348619   50033 command_runner.go:130] > # which might increase security.
	I0829 19:01:16.348627   50033 command_runner.go:130] > # This option is currently deprecated,
	I0829 19:01:16.348639   50033 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0829 19:01:16.348650   50033 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0829 19:01:16.348662   50033 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0829 19:01:16.348673   50033 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0829 19:01:16.348685   50033 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0829 19:01:16.348696   50033 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0829 19:01:16.348704   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.348710   50033 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0829 19:01:16.348722   50033 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0829 19:01:16.348732   50033 command_runner.go:130] > # the cgroup blockio controller.
	I0829 19:01:16.348742   50033 command_runner.go:130] > # blockio_config_file = ""
	I0829 19:01:16.348755   50033 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0829 19:01:16.348765   50033 command_runner.go:130] > # blockio parameters.
	I0829 19:01:16.348901   50033 command_runner.go:130] > # blockio_reload = false
	I0829 19:01:16.348918   50033 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0829 19:01:16.348927   50033 command_runner.go:130] > # irqbalance daemon.
	I0829 19:01:16.349177   50033 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0829 19:01:16.349191   50033 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0829 19:01:16.349204   50033 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0829 19:01:16.349224   50033 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0829 19:01:16.349402   50033 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0829 19:01:16.349417   50033 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0829 19:01:16.349425   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.350028   50033 command_runner.go:130] > # rdt_config_file = ""
	I0829 19:01:16.350044   50033 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0829 19:01:16.350049   50033 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0829 19:01:16.350084   50033 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0829 19:01:16.350109   50033 command_runner.go:130] > # separate_pull_cgroup = ""
	I0829 19:01:16.350119   50033 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0829 19:01:16.350132   50033 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0829 19:01:16.350140   50033 command_runner.go:130] > # will be added.
	I0829 19:01:16.350144   50033 command_runner.go:130] > # default_capabilities = [
	I0829 19:01:16.350147   50033 command_runner.go:130] > # 	"CHOWN",
	I0829 19:01:16.350151   50033 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0829 19:01:16.350157   50033 command_runner.go:130] > # 	"FSETID",
	I0829 19:01:16.350161   50033 command_runner.go:130] > # 	"FOWNER",
	I0829 19:01:16.350165   50033 command_runner.go:130] > # 	"SETGID",
	I0829 19:01:16.350171   50033 command_runner.go:130] > # 	"SETUID",
	I0829 19:01:16.350175   50033 command_runner.go:130] > # 	"SETPCAP",
	I0829 19:01:16.350181   50033 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0829 19:01:16.350186   50033 command_runner.go:130] > # 	"KILL",
	I0829 19:01:16.350194   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350205   50033 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0829 19:01:16.350219   50033 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0829 19:01:16.350230   50033 command_runner.go:130] > # add_inheritable_capabilities = false
	I0829 19:01:16.350242   50033 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0829 19:01:16.350255   50033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:01:16.350268   50033 command_runner.go:130] > default_sysctls = [
	I0829 19:01:16.350273   50033 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0829 19:01:16.350278   50033 command_runner.go:130] > ]
	I0829 19:01:16.350282   50033 command_runner.go:130] > # List of devices on the host that a
	I0829 19:01:16.350288   50033 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0829 19:01:16.350294   50033 command_runner.go:130] > # allowed_devices = [
	I0829 19:01:16.350297   50033 command_runner.go:130] > # 	"/dev/fuse",
	I0829 19:01:16.350301   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350316   50033 command_runner.go:130] > # List of additional devices. specified as
	I0829 19:01:16.350330   50033 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0829 19:01:16.350339   50033 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0829 19:01:16.350351   50033 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0829 19:01:16.350367   50033 command_runner.go:130] > # additional_devices = [
	I0829 19:01:16.350376   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350383   50033 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0829 19:01:16.350395   50033 command_runner.go:130] > # cdi_spec_dirs = [
	I0829 19:01:16.350404   50033 command_runner.go:130] > # 	"/etc/cdi",
	I0829 19:01:16.350410   50033 command_runner.go:130] > # 	"/var/run/cdi",
	I0829 19:01:16.350416   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350427   50033 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0829 19:01:16.350439   50033 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0829 19:01:16.350449   50033 command_runner.go:130] > # Defaults to false.
	I0829 19:01:16.350456   50033 command_runner.go:130] > # device_ownership_from_security_context = false
	I0829 19:01:16.350467   50033 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0829 19:01:16.350475   50033 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0829 19:01:16.350479   50033 command_runner.go:130] > # hooks_dir = [
	I0829 19:01:16.350487   50033 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0829 19:01:16.350492   50033 command_runner.go:130] > # ]
	I0829 19:01:16.350505   50033 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0829 19:01:16.350518   50033 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0829 19:01:16.350529   50033 command_runner.go:130] > # its default mounts from the following two files:
	I0829 19:01:16.350536   50033 command_runner.go:130] > #
	I0829 19:01:16.350545   50033 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0829 19:01:16.350558   50033 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0829 19:01:16.350567   50033 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0829 19:01:16.350571   50033 command_runner.go:130] > #
	I0829 19:01:16.350583   50033 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0829 19:01:16.350596   50033 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0829 19:01:16.350609   50033 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0829 19:01:16.350620   50033 command_runner.go:130] > #      only add mounts it finds in this file.
	I0829 19:01:16.350625   50033 command_runner.go:130] > #
	I0829 19:01:16.350634   50033 command_runner.go:130] > # default_mounts_file = ""
	I0829 19:01:16.350644   50033 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0829 19:01:16.350656   50033 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0829 19:01:16.350670   50033 command_runner.go:130] > pids_limit = 1024
	I0829 19:01:16.350682   50033 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0829 19:01:16.350695   50033 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0829 19:01:16.350705   50033 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0829 19:01:16.350721   50033 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0829 19:01:16.350730   50033 command_runner.go:130] > # log_size_max = -1
	I0829 19:01:16.350741   50033 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0829 19:01:16.350752   50033 command_runner.go:130] > # log_to_journald = false
	I0829 19:01:16.350762   50033 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0829 19:01:16.350770   50033 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0829 19:01:16.350780   50033 command_runner.go:130] > # Path to directory for container attach sockets.
	I0829 19:01:16.350791   50033 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0829 19:01:16.350800   50033 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0829 19:01:16.350810   50033 command_runner.go:130] > # bind_mount_prefix = ""
	I0829 19:01:16.350818   50033 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0829 19:01:16.350827   50033 command_runner.go:130] > # read_only = false
	I0829 19:01:16.350836   50033 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0829 19:01:16.350848   50033 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0829 19:01:16.350857   50033 command_runner.go:130] > # live configuration reload.
	I0829 19:01:16.350864   50033 command_runner.go:130] > # log_level = "info"
	I0829 19:01:16.350871   50033 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0829 19:01:16.350881   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.350888   50033 command_runner.go:130] > # log_filter = ""
	I0829 19:01:16.350900   50033 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0829 19:01:16.350913   50033 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0829 19:01:16.350922   50033 command_runner.go:130] > # separated by comma.
	I0829 19:01:16.350934   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.350943   50033 command_runner.go:130] > # uid_mappings = ""
	I0829 19:01:16.350949   50033 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0829 19:01:16.350959   50033 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0829 19:01:16.350969   50033 command_runner.go:130] > # separated by comma.
	I0829 19:01:16.350981   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.350990   50033 command_runner.go:130] > # gid_mappings = ""
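For illustration, a mapping in the containerID:HostID:Size form described above could look like the following; the 100000/65536 host range is an assumed example, not a value taken from this cluster:

	# uid_mappings = "0:100000:65536"
	# gid_mappings = "0:100000:65536"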
	I0829 19:01:16.351004   50033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0829 19:01:16.351015   50033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:01:16.351027   50033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:01:16.351043   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.351052   50033 command_runner.go:130] > # minimum_mappable_uid = -1
	I0829 19:01:16.351061   50033 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0829 19:01:16.351074   50033 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0829 19:01:16.351085   50033 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0829 19:01:16.351100   50033 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0829 19:01:16.351109   50033 command_runner.go:130] > # minimum_mappable_gid = -1
	I0829 19:01:16.351118   50033 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0829 19:01:16.351129   50033 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0829 19:01:16.351140   50033 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0829 19:01:16.351147   50033 command_runner.go:130] > # ctr_stop_timeout = 30
	I0829 19:01:16.351158   50033 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0829 19:01:16.351168   50033 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0829 19:01:16.351178   50033 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0829 19:01:16.351186   50033 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0829 19:01:16.351197   50033 command_runner.go:130] > drop_infra_ctr = false
	I0829 19:01:16.351206   50033 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0829 19:01:16.351218   50033 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0829 19:01:16.351231   50033 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0829 19:01:16.351240   50033 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0829 19:01:16.351248   50033 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0829 19:01:16.351257   50033 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0829 19:01:16.351262   50033 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0829 19:01:16.351268   50033 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0829 19:01:16.351271   50033 command_runner.go:130] > # shared_cpuset = ""
	I0829 19:01:16.351277   50033 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0829 19:01:16.351286   50033 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0829 19:01:16.351293   50033 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0829 19:01:16.351305   50033 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0829 19:01:16.351312   50033 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0829 19:01:16.351324   50033 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0829 19:01:16.351334   50033 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0829 19:01:16.351344   50033 command_runner.go:130] > # enable_criu_support = false
	I0829 19:01:16.351352   50033 command_runner.go:130] > # Enable/disable the generation of the container,
	I0829 19:01:16.351368   50033 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0829 19:01:16.351378   50033 command_runner.go:130] > # enable_pod_events = false
	I0829 19:01:16.351396   50033 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0829 19:01:16.351420   50033 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0829 19:01:16.351429   50033 command_runner.go:130] > # default_runtime = "runc"
	I0829 19:01:16.351437   50033 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0829 19:01:16.351447   50033 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the missing path as a directory).
	I0829 19:01:16.351460   50033 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0829 19:01:16.351466   50033 command_runner.go:130] > # creation as a file is not desired either.
	I0829 19:01:16.351475   50033 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0829 19:01:16.351489   50033 command_runner.go:130] > # the hostname is being managed dynamically.
	I0829 19:01:16.351499   50033 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0829 19:01:16.351505   50033 command_runner.go:130] > # ]
	I0829 19:01:16.351517   50033 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0829 19:01:16.351530   50033 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0829 19:01:16.351541   50033 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0829 19:01:16.351550   50033 command_runner.go:130] > # Each entry in the table should follow the format:
	I0829 19:01:16.351555   50033 command_runner.go:130] > #
	I0829 19:01:16.351559   50033 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0829 19:01:16.351566   50033 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0829 19:01:16.351609   50033 command_runner.go:130] > # runtime_type = "oci"
	I0829 19:01:16.351617   50033 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0829 19:01:16.351622   50033 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0829 19:01:16.351626   50033 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0829 19:01:16.351631   50033 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0829 19:01:16.351636   50033 command_runner.go:130] > # monitor_env = []
	I0829 19:01:16.351641   50033 command_runner.go:130] > # privileged_without_host_devices = false
	I0829 19:01:16.351647   50033 command_runner.go:130] > # allowed_annotations = []
	I0829 19:01:16.351652   50033 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0829 19:01:16.351657   50033 command_runner.go:130] > # Where:
	I0829 19:01:16.351663   50033 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0829 19:01:16.351670   50033 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0829 19:01:16.351679   50033 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0829 19:01:16.351687   50033 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0829 19:01:16.351694   50033 command_runner.go:130] > #   in $PATH.
	I0829 19:01:16.351700   50033 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0829 19:01:16.351707   50033 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0829 19:01:16.351716   50033 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0829 19:01:16.351722   50033 command_runner.go:130] > #   state.
	I0829 19:01:16.351728   50033 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0829 19:01:16.351735   50033 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0829 19:01:16.351741   50033 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0829 19:01:16.351749   50033 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0829 19:01:16.351758   50033 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0829 19:01:16.351766   50033 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0829 19:01:16.351772   50033 command_runner.go:130] > #   The currently recognized values are:
	I0829 19:01:16.351778   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0829 19:01:16.351787   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0829 19:01:16.351797   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0829 19:01:16.351805   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0829 19:01:16.351814   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0829 19:01:16.351822   50033 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0829 19:01:16.351828   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0829 19:01:16.351836   50033 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0829 19:01:16.351842   50033 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0829 19:01:16.351849   50033 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0829 19:01:16.351854   50033 command_runner.go:130] > #   deprecated option "conmon".
	I0829 19:01:16.351861   50033 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0829 19:01:16.351868   50033 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0829 19:01:16.351874   50033 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0829 19:01:16.351881   50033 command_runner.go:130] > #   should be moved to the container's cgroup
	I0829 19:01:16.351887   50033 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0829 19:01:16.351894   50033 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0829 19:01:16.351900   50033 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0829 19:01:16.351907   50033 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0829 19:01:16.351910   50033 command_runner.go:130] > #
	I0829 19:01:16.351914   50033 command_runner.go:130] > # Using the seccomp notifier feature:
	I0829 19:01:16.351917   50033 command_runner.go:130] > #
	I0829 19:01:16.351923   50033 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0829 19:01:16.351931   50033 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0829 19:01:16.351936   50033 command_runner.go:130] > #
	I0829 19:01:16.351942   50033 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0829 19:01:16.351950   50033 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0829 19:01:16.351958   50033 command_runner.go:130] > #
	I0829 19:01:16.351966   50033 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0829 19:01:16.351975   50033 command_runner.go:130] > # feature.
	I0829 19:01:16.351978   50033 command_runner.go:130] > #
	I0829 19:01:16.351988   50033 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0829 19:01:16.351995   50033 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0829 19:01:16.352003   50033 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0829 19:01:16.352011   50033 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0829 19:01:16.352017   50033 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0829 19:01:16.352028   50033 command_runner.go:130] > #
	I0829 19:01:16.352033   50033 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0829 19:01:16.352042   50033 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0829 19:01:16.352052   50033 command_runner.go:130] > #
	I0829 19:01:16.352058   50033 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0829 19:01:16.352065   50033 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0829 19:01:16.352069   50033 command_runner.go:130] > #
	I0829 19:01:16.352077   50033 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0829 19:01:16.352082   50033 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0829 19:01:16.352088   50033 command_runner.go:130] > # limitation.
	I0829 19:01:16.352092   50033 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0829 19:01:16.352098   50033 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0829 19:01:16.352102   50033 command_runner.go:130] > runtime_type = "oci"
	I0829 19:01:16.352108   50033 command_runner.go:130] > runtime_root = "/run/runc"
	I0829 19:01:16.352112   50033 command_runner.go:130] > runtime_config_path = ""
	I0829 19:01:16.352119   50033 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0829 19:01:16.352123   50033 command_runner.go:130] > monitor_cgroup = "pod"
	I0829 19:01:16.352129   50033 command_runner.go:130] > monitor_exec_cgroup = ""
	I0829 19:01:16.352133   50033 command_runner.go:130] > monitor_env = [
	I0829 19:01:16.352140   50033 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0829 19:01:16.352143   50033 command_runner.go:130] > ]
	I0829 19:01:16.352147   50033 command_runner.go:130] > privileged_without_host_devices = false
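Following the runtime-handler format documented above, a second handler could be declared alongside runc. This is a hypothetical sketch; the crun binary path and root directory are assumptions, not values taken from this host:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_exec_cgroup = ""
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]

A pod would typically select such a handler through a Kubernetes RuntimeClass whose handler field matches the runtime-handler name ("crun" in this sketch).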
	I0829 19:01:16.352156   50033 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0829 19:01:16.352161   50033 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0829 19:01:16.352167   50033 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0829 19:01:16.352176   50033 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0829 19:01:16.352185   50033 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0829 19:01:16.352198   50033 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0829 19:01:16.352209   50033 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0829 19:01:16.352218   50033 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0829 19:01:16.352226   50033 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0829 19:01:16.352232   50033 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0829 19:01:16.352236   50033 command_runner.go:130] > # Example:
	I0829 19:01:16.352240   50033 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0829 19:01:16.352244   50033 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0829 19:01:16.352248   50033 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0829 19:01:16.352253   50033 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0829 19:01:16.352256   50033 command_runner.go:130] > # cpuset = 0
	I0829 19:01:16.352260   50033 command_runner.go:130] > # cpushares = "0-1"
	I0829 19:01:16.352263   50033 command_runner.go:130] > # Where:
	I0829 19:01:16.352270   50033 command_runner.go:130] > # The workload name is workload-type.
	I0829 19:01:16.352276   50033 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0829 19:01:16.352281   50033 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0829 19:01:16.352286   50033 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0829 19:01:16.352293   50033 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0829 19:01:16.352299   50033 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
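To make the example above concrete, a pod opting into the "workload-type" workload would carry the activation annotation plus an optional per-container override. The YAML below is illustrative only; the pod and container names are invented, and only the annotation keys mirror the sample config:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: tuned-pod
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10

As the comments state, the activation annotation is matched on the key only, so its value can be left empty.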
	I0829 19:01:16.352303   50033 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0829 19:01:16.352309   50033 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0829 19:01:16.352313   50033 command_runner.go:130] > # Default value is set to true
	I0829 19:01:16.352317   50033 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0829 19:01:16.352322   50033 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0829 19:01:16.352326   50033 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0829 19:01:16.352330   50033 command_runner.go:130] > # Default value is set to 'false'
	I0829 19:01:16.352334   50033 command_runner.go:130] > # disable_hostport_mapping = false
	I0829 19:01:16.352340   50033 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0829 19:01:16.352342   50033 command_runner.go:130] > #
	I0829 19:01:16.352348   50033 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0829 19:01:16.352354   50033 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0829 19:01:16.352362   50033 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0829 19:01:16.352368   50033 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0829 19:01:16.352373   50033 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0829 19:01:16.352376   50033 command_runner.go:130] > [crio.image]
	I0829 19:01:16.352382   50033 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0829 19:01:16.352390   50033 command_runner.go:130] > # default_transport = "docker://"
	I0829 19:01:16.352396   50033 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0829 19:01:16.352402   50033 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:01:16.352405   50033 command_runner.go:130] > # global_auth_file = ""
	I0829 19:01:16.352410   50033 command_runner.go:130] > # The image used to instantiate infra containers.
	I0829 19:01:16.352414   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.352420   50033 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0829 19:01:16.352428   50033 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0829 19:01:16.352434   50033 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0829 19:01:16.352442   50033 command_runner.go:130] > # This option supports live configuration reload.
	I0829 19:01:16.352446   50033 command_runner.go:130] > # pause_image_auth_file = ""
	I0829 19:01:16.352454   50033 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0829 19:01:16.352459   50033 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0829 19:01:16.352469   50033 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0829 19:01:16.352476   50033 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0829 19:01:16.352483   50033 command_runner.go:130] > # pause_command = "/pause"
	I0829 19:01:16.352489   50033 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0829 19:01:16.352496   50033 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0829 19:01:16.352503   50033 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0829 19:01:16.352510   50033 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0829 19:01:16.352516   50033 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0829 19:01:16.352523   50033 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0829 19:01:16.352530   50033 command_runner.go:130] > # pinned_images = [
	I0829 19:01:16.352533   50033 command_runner.go:130] > # ]
	I0829 19:01:16.352540   50033 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0829 19:01:16.352546   50033 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0829 19:01:16.352555   50033 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0829 19:01:16.352561   50033 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0829 19:01:16.352568   50033 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0829 19:01:16.352572   50033 command_runner.go:130] > # signature_policy = ""
	I0829 19:01:16.352579   50033 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0829 19:01:16.352585   50033 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0829 19:01:16.352593   50033 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0829 19:01:16.352600   50033 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0829 19:01:16.352609   50033 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0829 19:01:16.352616   50033 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0829 19:01:16.352626   50033 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0829 19:01:16.352634   50033 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0829 19:01:16.352639   50033 command_runner.go:130] > # changing them here.
	I0829 19:01:16.352645   50033 command_runner.go:130] > # insecure_registries = [
	I0829 19:01:16.352648   50033 command_runner.go:130] > # ]
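As the comments note, registries are normally configured system-wide in /etc/containers/registries.conf; if an override in this file were wanted, it would take the same list form as the commented option above (the registry address here is an invented example, not one used by this cluster):

	insecure_registries = [
		"registry.example.internal:5000",
	]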
	I0829 19:01:16.352654   50033 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0829 19:01:16.352661   50033 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0829 19:01:16.352665   50033 command_runner.go:130] > # image_volumes = "mkdir"
	I0829 19:01:16.352672   50033 command_runner.go:130] > # Temporary directory to use for storing big files
	I0829 19:01:16.352676   50033 command_runner.go:130] > # big_files_temporary_dir = ""
	I0829 19:01:16.352684   50033 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0829 19:01:16.352689   50033 command_runner.go:130] > # CNI plugins.
	I0829 19:01:16.352693   50033 command_runner.go:130] > [crio.network]
	I0829 19:01:16.352700   50033 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0829 19:01:16.352708   50033 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0829 19:01:16.352714   50033 command_runner.go:130] > # cni_default_network = ""
	I0829 19:01:16.352720   50033 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0829 19:01:16.352726   50033 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0829 19:01:16.352732   50033 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0829 19:01:16.352737   50033 command_runner.go:130] > # plugin_dirs = [
	I0829 19:01:16.352741   50033 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0829 19:01:16.352746   50033 command_runner.go:130] > # ]
	I0829 19:01:16.352751   50033 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0829 19:01:16.352757   50033 command_runner.go:130] > [crio.metrics]
	I0829 19:01:16.352762   50033 command_runner.go:130] > # Globally enable or disable metrics support.
	I0829 19:01:16.352768   50033 command_runner.go:130] > enable_metrics = true
	I0829 19:01:16.352772   50033 command_runner.go:130] > # Specify enabled metrics collectors.
	I0829 19:01:16.352778   50033 command_runner.go:130] > # Per default all metrics are enabled.
	I0829 19:01:16.352784   50033 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0829 19:01:16.352792   50033 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0829 19:01:16.352800   50033 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0829 19:01:16.352803   50033 command_runner.go:130] > # metrics_collectors = [
	I0829 19:01:16.352809   50033 command_runner.go:130] > # 	"operations",
	I0829 19:01:16.352814   50033 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0829 19:01:16.352821   50033 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0829 19:01:16.352824   50033 command_runner.go:130] > # 	"operations_errors",
	I0829 19:01:16.352833   50033 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0829 19:01:16.352840   50033 command_runner.go:130] > # 	"image_pulls_by_name",
	I0829 19:01:16.352844   50033 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0829 19:01:16.352850   50033 command_runner.go:130] > # 	"image_pulls_failures",
	I0829 19:01:16.352855   50033 command_runner.go:130] > # 	"image_pulls_successes",
	I0829 19:01:16.352861   50033 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0829 19:01:16.352865   50033 command_runner.go:130] > # 	"image_layer_reuse",
	I0829 19:01:16.352871   50033 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0829 19:01:16.352876   50033 command_runner.go:130] > # 	"containers_oom_total",
	I0829 19:01:16.352882   50033 command_runner.go:130] > # 	"containers_oom",
	I0829 19:01:16.352886   50033 command_runner.go:130] > # 	"processes_defunct",
	I0829 19:01:16.352892   50033 command_runner.go:130] > # 	"operations_total",
	I0829 19:01:16.352896   50033 command_runner.go:130] > # 	"operations_latency_seconds",
	I0829 19:01:16.352902   50033 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0829 19:01:16.352906   50033 command_runner.go:130] > # 	"operations_errors_total",
	I0829 19:01:16.352912   50033 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0829 19:01:16.352916   50033 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0829 19:01:16.352922   50033 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0829 19:01:16.352926   50033 command_runner.go:130] > # 	"image_pulls_success_total",
	I0829 19:01:16.352932   50033 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0829 19:01:16.352936   50033 command_runner.go:130] > # 	"containers_oom_count_total",
	I0829 19:01:16.352943   50033 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0829 19:01:16.352947   50033 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0829 19:01:16.352954   50033 command_runner.go:130] > # ]
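Since all collectors are enabled by default, uncommenting metrics_collectors is only useful to restrict the set. An illustrative subset, using names from the list above, would be:

	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]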
	I0829 19:01:16.352960   50033 command_runner.go:130] > # The port on which the metrics server will listen.
	I0829 19:01:16.352966   50033 command_runner.go:130] > # metrics_port = 9090
	I0829 19:01:16.352972   50033 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0829 19:01:16.352977   50033 command_runner.go:130] > # metrics_socket = ""
	I0829 19:01:16.352982   50033 command_runner.go:130] > # The certificate for the secure metrics server.
	I0829 19:01:16.352990   50033 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0829 19:01:16.352996   50033 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0829 19:01:16.353003   50033 command_runner.go:130] > # certificate on any modification event.
	I0829 19:01:16.353006   50033 command_runner.go:130] > # metrics_cert = ""
	I0829 19:01:16.353011   50033 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0829 19:01:16.353018   50033 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0829 19:01:16.353022   50033 command_runner.go:130] > # metrics_key = ""
	I0829 19:01:16.353033   50033 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0829 19:01:16.353039   50033 command_runner.go:130] > [crio.tracing]
	I0829 19:01:16.353044   50033 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0829 19:01:16.353050   50033 command_runner.go:130] > # enable_tracing = false
	I0829 19:01:16.353058   50033 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0829 19:01:16.353064   50033 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0829 19:01:16.353070   50033 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0829 19:01:16.353077   50033 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0829 19:01:16.353081   50033 command_runner.go:130] > # CRI-O NRI configuration.
	I0829 19:01:16.353087   50033 command_runner.go:130] > [crio.nri]
	I0829 19:01:16.353091   50033 command_runner.go:130] > # Globally enable or disable NRI.
	I0829 19:01:16.353095   50033 command_runner.go:130] > # enable_nri = false
	I0829 19:01:16.353099   50033 command_runner.go:130] > # NRI socket to listen on.
	I0829 19:01:16.353107   50033 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0829 19:01:16.353113   50033 command_runner.go:130] > # NRI plugin directory to use.
	I0829 19:01:16.353124   50033 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0829 19:01:16.353133   50033 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0829 19:01:16.353140   50033 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0829 19:01:16.353145   50033 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0829 19:01:16.353152   50033 command_runner.go:130] > # nri_disable_connections = false
	I0829 19:01:16.353157   50033 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0829 19:01:16.353164   50033 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0829 19:01:16.353169   50033 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0829 19:01:16.353175   50033 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0829 19:01:16.353181   50033 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0829 19:01:16.353186   50033 command_runner.go:130] > [crio.stats]
	I0829 19:01:16.353192   50033 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0829 19:01:16.353199   50033 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0829 19:01:16.353205   50033 command_runner.go:130] > # stats_collection_period = 0
	I0829 19:01:16.353228   50033 command_runner.go:130] ! time="2024-08-29 19:01:16.312144775Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0829 19:01:16.353250   50033 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0829 19:01:16.353395   50033 cni.go:84] Creating CNI manager for ""
	I0829 19:01:16.353414   50033 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0829 19:01:16.353440   50033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:01:16.353473   50033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.171 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-922931 NodeName:multinode-922931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:01:16.353667   50033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-922931"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:01:16.353739   50033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:01:16.363176   50033 command_runner.go:130] > kubeadm
	I0829 19:01:16.363190   50033 command_runner.go:130] > kubectl
	I0829 19:01:16.363195   50033 command_runner.go:130] > kubelet
	I0829 19:01:16.363215   50033 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:01:16.363269   50033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:01:16.372175   50033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:01:16.387722   50033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:01:16.402868   50033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
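The file written above is the kubeadm config rendered earlier in the log. kubeadm consumes such a file via its --config flag; a minimal manual invocation (not necessarily the exact command this test run executes, since minikube drives kubeadm itself and may add flags not shown in this excerpt) would be:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new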
	I0829 19:01:16.417844   50033 ssh_runner.go:195] Run: grep 192.168.39.171	control-plane.minikube.internal$ /etc/hosts
	I0829 19:01:16.421328   50033 command_runner.go:130] > 192.168.39.171	control-plane.minikube.internal
	I0829 19:01:16.421458   50033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:01:16.562663   50033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:01:16.576752   50033 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931 for IP: 192.168.39.171
	I0829 19:01:16.576774   50033 certs.go:194] generating shared ca certs ...
	I0829 19:01:16.576800   50033 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:01:16.576968   50033 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:01:16.577021   50033 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:01:16.577035   50033 certs.go:256] generating profile certs ...
	I0829 19:01:16.577187   50033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/client.key
	I0829 19:01:16.577244   50033 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.key.a63428f4
	I0829 19:01:16.577274   50033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.key
	I0829 19:01:16.577282   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0829 19:01:16.577293   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0829 19:01:16.577310   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0829 19:01:16.577322   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0829 19:01:16.577340   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0829 19:01:16.577355   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0829 19:01:16.577369   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0829 19:01:16.577378   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0829 19:01:16.577441   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:01:16.577482   50033 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:01:16.577497   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:01:16.577524   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:01:16.577550   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:01:16.577583   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:01:16.577643   50033 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:01:16.577675   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.577696   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.577715   50033 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem -> /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.578382   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:01:16.601162   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:01:16.624283   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:01:16.646958   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:01:16.668846   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:01:16.691137   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:01:16.712989   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:01:16.736142   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/multinode-922931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:01:16.758429   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:01:16.780429   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:01:16.802946   50033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:01:16.824359   50033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:01:16.839667   50033 ssh_runner.go:195] Run: openssl version
	I0829 19:01:16.845224   50033 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0829 19:01:16.845284   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:01:16.855340   50033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.859379   50033 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.859471   50033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.859534   50033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:01:16.864634   50033 command_runner.go:130] > b5213941
	I0829 19:01:16.864811   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:01:16.875614   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:01:16.886281   50033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.890536   50033 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.890568   50033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.890633   50033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:01:16.896398   50033 command_runner.go:130] > 51391683
	I0829 19:01:16.896565   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:01:16.906370   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:01:16.918304   50033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.923130   50033 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.923157   50033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.923214   50033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:01:16.929297   50033 command_runner.go:130] > 3ec20f2e
	I0829 19:01:16.929387   50033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:01:16.940746   50033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:01:16.945381   50033 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:01:16.945411   50033 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0829 19:01:16.945421   50033 command_runner.go:130] > Device: 253,1	Inode: 4197398     Links: 1
	I0829 19:01:16.945430   50033 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0829 19:01:16.945438   50033 command_runner.go:130] > Access: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945447   50033 command_runner.go:130] > Modify: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945455   50033 command_runner.go:130] > Change: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945462   50033 command_runner.go:130] >  Birth: 2024-08-29 18:54:26.787101614 +0000
	I0829 19:01:16.945549   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:01:16.951058   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.951229   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:01:16.956737   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.956847   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:01:16.962074   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.962167   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:01:16.967444   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.967502   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:01:16.972638   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.972697   50033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:01:16.977738   50033 command_runner.go:130] > Certificate will not expire
	I0829 19:01:16.977912   50033 kubeadm.go:392] StartCluster: {Name:multinode-922931 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-922931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.226 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:01:16.978012   50033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:01:16.978069   50033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:01:17.014228   50033 command_runner.go:130] > e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0
	I0829 19:01:17.014253   50033 command_runner.go:130] > 621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655
	I0829 19:01:17.014259   50033 command_runner.go:130] > 04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba
	I0829 19:01:17.014266   50033 command_runner.go:130] > f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626
	I0829 19:01:17.014271   50033 command_runner.go:130] > 629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632
	I0829 19:01:17.014277   50033 command_runner.go:130] > 4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2
	I0829 19:01:17.014282   50033 command_runner.go:130] > 7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547
	I0829 19:01:17.014291   50033 command_runner.go:130] > 03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e
	I0829 19:01:17.014308   50033 cri.go:89] found id: "e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0"
	I0829 19:01:17.014315   50033 cri.go:89] found id: "621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655"
	I0829 19:01:17.014318   50033 cri.go:89] found id: "04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba"
	I0829 19:01:17.014321   50033 cri.go:89] found id: "f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626"
	I0829 19:01:17.014324   50033 cri.go:89] found id: "629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632"
	I0829 19:01:17.014328   50033 cri.go:89] found id: "4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2"
	I0829 19:01:17.014334   50033 cri.go:89] found id: "7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547"
	I0829 19:01:17.014337   50033 cri.go:89] found id: "03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e"
	I0829 19:01:17.014340   50033 cri.go:89] found id: ""
	I0829 19:01:17.014377   50033 ssh_runner.go:195] Run: sudo runc list -f json
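	(Editor's note: the repeated "Certificate will not expire" lines above are the output of "openssl x509 -noout -checkend 86400" run against each control-plane certificate; the command exits 0 only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now. As a rough, standalone illustration only, and not minikube source code, an equivalent check in Go might look like the sketch below; the certificate path is reused from the log purely as an example.)

	// Hypothetical sketch: report whether a PEM certificate expires within 24h,
	// mirroring the "openssl x509 -noout -checkend 86400" calls seen in the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log above, used here only as an example input.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM data found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of -checkend 86400: fail if the cert is no longer valid 24h from now.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}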
	
	
	==> CRI-O <==
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.319351923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958327319328922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed9d634d-d9b5-429c-8a9a-7f9a4023621b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.319889046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95f3f261-f039-43a7-a927-42087c9c610c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.319953044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95f3f261-f039-43a7-a927-42087c9c610c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.320564504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95f3f261-f039-43a7-a927-42087c9c610c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.360879731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efffb16e-d637-4a82-ba56-9b088f3386bb name=/runtime.v1.RuntimeService/Version
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.360980613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efffb16e-d637-4a82-ba56-9b088f3386bb name=/runtime.v1.RuntimeService/Version
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.361899956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d17c830-f342-4377-91fd-f432da5461ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.362566636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958327362542923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d17c830-f342-4377-91fd-f432da5461ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.363064226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0fff262-831e-47a2-a721-022d507e27dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.363174485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0fff262-831e-47a2-a721-022d507e27dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.363691888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0fff262-831e-47a2-a721-022d507e27dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.401447556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=674bd3d2-c438-443a-a1e8-a373d71ef300 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.401538734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=674bd3d2-c438-443a-a1e8-a373d71ef300 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.402713408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74a820e5-5aa1-43cf-a213-5e8818e97742 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.403311046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958327403286677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74a820e5-5aa1-43cf-a213-5e8818e97742 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.403763363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6493d49-2ca8-48b8-aba6-4048794bbe31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.403821427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6493d49-2ca8-48b8-aba6-4048794bbe31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.404313549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6493d49-2ca8-48b8-aba6-4048794bbe31 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.443941504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c627583f-ef60-4c39-8319-ad8e601a39f2 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.444019087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c627583f-ef60-4c39-8319-ad8e601a39f2 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.445542400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=348f96f6-bb9e-494b-a3be-d5dcc26d317a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.445940504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958327445918956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=348f96f6-bb9e-494b-a3be-d5dcc26d317a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.446654035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cb4da5b-606a-4193-b2ac-9642824fce9f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.446706961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cb4da5b-606a-4193-b2ac-9642824fce9f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:05:27 multinode-922931 crio[2716]: time="2024-08-29 19:05:27.447038625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c6c25057b1e25519e64591d4656700d0319638d73916ddc9a1b94f268feb8d8,PodSandboxId:1bd222c5d9be5f32c59a8bf6e60f4a3c8c4aa7193250c819edf8fa4a44236975,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724958117072932138,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116,PodSandboxId:5a86fb7a38c786dda92b56c631932295c47f14e17ffb1580de17b5655c7ac294,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724958083503615280,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038,PodSandboxId:695f41f3f98e025279dc8ecaa7f2403c8aef2f18ff1ad2f199892683305502bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724958083389773593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94,PodSandboxId:63f0b32e2107d6ea3fa065ddb87b48ec77fc19d7bc8bf305b3c3475907691cb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724958083335582882,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cdf127e95844cf2235e2cacadfdbcfff140ba972d78bf0d7c48956194fe77c0,PodSandboxId:414052dff31ade6682e4ed2f1626d4d228b0c4bab5294532a3daff3d63f29867,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724958083319989504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857,PodSandboxId:ce2b22b804b80cd5dae53b67442ac02f8500e8ea12cdec571232da157b3ac936,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724958079525231197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad,PodSandboxId:e781e93e250a05870f040ea4e424d861e10a96f06f693186b3a7a112b8bd509d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724958079491386105,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b860fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6,PodSandboxId:3bcc04bcf1ef3e323eebcf5dbd3c844c2e9f8fb3516f6c1f308d57da0f763bb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724958079506528581,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e,PodSandboxId:eed4a8d43471ac84afb8f24e557355ce3973a063b76a45bcb8bb99e3fe443867,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724958079486879754,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f12adc01e898985706f5ce770d0c2f094f7ac33f8994e5174acac97c7279fe,PodSandboxId:64b9f8a3ebc854154b624612916df26cdd3c7de4ac6a42e4d2fc8374c985fd3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724957755929337620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9dk5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c948c1ad-9ddf-4518-82e8-2bddad735667,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e301ed91cbcd41c9e71c1aa03757e92221389ca0a1c9c52fd306b1250a21e0,PodSandboxId:942a26718b07f5a7975c0e889c247bdefc2b6795d982e46240d01598c7c1c8cc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724957697069238738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f61f623-598d-49a4-96f3-e8458a94432d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655,PodSandboxId:c99b505d069c472ea587d06bc6d260c286e5ba26167704fa68c753da43ba4cb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724957697044806289,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-m5hh2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d37f71c-00b6-4725-8b5e-8014993dd057,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba,PodSandboxId:1253fca3a9769def91dcb35aef9aa1eb2f6e52affdaf79fa683ca80a143eb11a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724957685330788894,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xt8rz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 39ad8429-f82d-40b2-9d5a-f9fd4f36f525,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626,PodSandboxId:93f539efa4b7e1bad827dcf9efb521fd1e0cf9a4a9ed203d2af34e459e5389eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724957682444206159,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-flq24,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 62880a62-5e17-4fe0-973c-26fc94f0fea2,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632,PodSandboxId:d196c525bb040c3e427c1ebed44e72a34b1ea2bb3cce423716d196ba57ddd5d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724957669976984817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
60fc526ebadf25d5ed9ab3a571a081,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2,PodSandboxId:38fc47d5fa2711024481449306414944648c7d64ffdb89b6ac93982586f74de8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724957669948698948,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8389bb6da7de24f38ae42727e6c12a6,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547,PodSandboxId:cfbf9fbe56462978abd3c5bd244a6ae220884a9010324204206d3d9ed9055134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724957669927613181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc99793b364391e874a58acd0561e338,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e,PodSandboxId:0b4951ffcb41e4f9e95142133bdc615cf7aba52eca2e75882c6491b9ee24db88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724957669881879392,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922931,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e67e8e32ee1e5831bbef69ea38a32d,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cb4da5b-606a-4193-b2ac-9642824fce9f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c6c25057b1e2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1bd222c5d9be5       busybox-7dff88458-9dk5v
	51cf7b4a2b427       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   5a86fb7a38c78       kindnet-xt8rz
	2e5f3cfcde3f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   695f41f3f98e0       coredns-6f6b679f8f-m5hh2
	b8aee643ba501       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   63f0b32e2107d       kube-proxy-flq24
	9cdf127e95844       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   414052dff31ad       storage-provisioner
	99afe537efacf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   ce2b22b804b80       etcd-multinode-922931
	bf6a443417161       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   3bcc04bcf1ef3       kube-apiserver-multinode-922931
	08466cf1de50c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   e781e93e250a0       kube-scheduler-multinode-922931
	cd16207ad3b78       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   eed4a8d43471a       kube-controller-manager-multinode-922931
	d6f12adc01e89       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   64b9f8a3ebc85       busybox-7dff88458-9dk5v
	e9e301ed91cbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   942a26718b07f       storage-provisioner
	621daeb85eedc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   c99b505d069c4       coredns-6f6b679f8f-m5hh2
	04ed982a9d246       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   1253fca3a9769       kindnet-xt8rz
	f0c82b2494ec0       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   93f539efa4b7e       kube-proxy-flq24
	629bd4d21adaa       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   d196c525bb040       kube-scheduler-multinode-922931
	4a17f1421a093       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   38fc47d5fa271       etcd-multinode-922931
	7867424ad4b04       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   cfbf9fbe56462       kube-apiserver-multinode-922931
	03ed977ad4a1d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   0b4951ffcb41e       kube-controller-manager-multinode-922931
	
	
	==> coredns [2e5f3cfcde3f89eb9823227944688b86c9781b7a2b5735717466999fe3596038] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60174 - 38567 "HINFO IN 7562962971531487601.846825696782145744. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010918715s
	
	
	==> coredns [621daeb85eedca9331ccbff8c6a458de5b0a38d3fdf3c99da124de045a9e3655] <==
	[INFO] 10.244.1.2:55711 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00170629s
	[INFO] 10.244.1.2:56002 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090237s
	[INFO] 10.244.1.2:56224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083937s
	[INFO] 10.244.1.2:47220 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00127978s
	[INFO] 10.244.1.2:55457 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059987s
	[INFO] 10.244.1.2:58382 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058695s
	[INFO] 10.244.1.2:33750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055373s
	[INFO] 10.244.0.3:38554 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085421s
	[INFO] 10.244.0.3:52406 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045679s
	[INFO] 10.244.0.3:56898 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032164s
	[INFO] 10.244.0.3:60906 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000027701s
	[INFO] 10.244.1.2:38162 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127575s
	[INFO] 10.244.1.2:49352 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000253324s
	[INFO] 10.244.1.2:52799 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082134s
	[INFO] 10.244.1.2:42108 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072074s
	[INFO] 10.244.0.3:35094 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150276s
	[INFO] 10.244.0.3:45459 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128965s
	[INFO] 10.244.0.3:39657 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097045s
	[INFO] 10.244.0.3:49961 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129456s
	[INFO] 10.244.1.2:37634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012354s
	[INFO] 10.244.1.2:33698 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000096537s
	[INFO] 10.244.1.2:59430 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000099446s
	[INFO] 10.244.1.2:56304 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069733s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-922931
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=multinode-922931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_54_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:54:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922931
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:05:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:01:22 +0000   Thu, 29 Aug 2024 18:54:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.171
	  Hostname:    multinode-922931
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 549624820dca455195f0e270dd2e4862
	  System UUID:                54962482-0dca-4551-95f0-e270dd2e4862
	  Boot ID:                    60f7d0bc-602e-4968-9053-47600fbbdc39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9dk5v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 coredns-6f6b679f8f-m5hh2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-922931                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-xt8rz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-922931             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-922931    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-flq24                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-922931             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-922931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-922931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-922931 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-922931 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-922931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-922931 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-922931 event: Registered Node multinode-922931 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-922931 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-922931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-922931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-922931 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-922931 event: Registered Node multinode-922931 in Controller
	
	
	Name:               multinode-922931-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922931-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=multinode-922931
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_29T19_02_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:02:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922931-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:03:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:03:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:03:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:03:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 29 Aug 2024 19:02:35 +0000   Thu, 29 Aug 2024 19:03:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    multinode-922931-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b7310b531df4a21aff9008f8b255f25
	  System UUID:                1b7310b5-31df-4a21-aff9-008f8b255f25
	  Boot ID:                    fcbcc64b-1873-4bf4-9ca3-c44e7ea8d40b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p68kf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-6qfwv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m58s
	  kube-system                 kube-proxy-qwdcr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m58s (x2 over 9m59s)  kubelet          Node multinode-922931-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x2 over 9m59s)  kubelet          Node multinode-922931-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x2 over 9m59s)  kubelet          Node multinode-922931-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-922931-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-922931-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-922931-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-922931-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m3s                   kubelet          Node multinode-922931-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-922931-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.054877] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.162387] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.131491] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.245873] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +3.795028] systemd-fstab-generator[742]: Ignoring "noauto" option for root device
	[  +2.906864] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.062400] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.935676] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.089844] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.568964] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +1.634074] kauditd_printk_skb: 46 callbacks suppressed
	[ +15.121467] kauditd_printk_skb: 41 callbacks suppressed
	[Aug29 18:55] kauditd_printk_skb: 12 callbacks suppressed
	[Aug29 19:01] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.156212] systemd-fstab-generator[2652]: Ignoring "noauto" option for root device
	[  +0.168713] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.137089] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.268022] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +6.537749] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.091561] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.049287] systemd-fstab-generator[2923]: Ignoring "noauto" option for root device
	[  +4.650330] kauditd_printk_skb: 74 callbacks suppressed
	[ +16.272570] systemd-fstab-generator[3772]: Ignoring "noauto" option for root device
	[  +0.092373] kauditd_printk_skb: 36 callbacks suppressed
	[ +17.409124] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [4a17f1421a093ba355755b7406c8928d6f6441ac020909642bdfdd6cf5dc0cc2] <==
	{"level":"info","ts":"2024-08-29T18:55:29.155348Z","caller":"traceutil/trace.go:171","msg":"trace[748270603] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"221.990226ms","start":"2024-08-29T18:55:28.933348Z","end":"2024-08-29T18:55:29.155339Z","steps":["trace[748270603] 'process raft request'  (duration: 215.112205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:55:29.155507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.914224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922931-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:55:29.155562Z","caller":"traceutil/trace.go:171","msg":"trace[246129906] range","detail":"{range_begin:/registry/minions/multinode-922931-m02; range_end:; response_count:0; response_revision:477; }","duration":"147.975071ms","start":"2024-08-29T18:55:29.007578Z","end":"2024-08-29T18:55:29.155553Z","steps":["trace[246129906] 'agreement among raft nodes before linearized reading'  (duration: 147.862084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:23.946964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.431227ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10610359361295044568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-922931-m03.17f047f70d85ee72\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-922931-m03.17f047f70d85ee72\" value_size:642 lease:1386987324440268341 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-29T18:56:23.947369Z","caller":"traceutil/trace.go:171","msg":"trace[1451279462] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"227.333678ms","start":"2024-08-29T18:56:23.720008Z","end":"2024-08-29T18:56:23.947341Z","steps":["trace[1451279462] 'process raft request'  (duration: 71.342509ms)","trace[1451279462] 'compare'  (duration: 155.289496ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:56:27.776049Z","caller":"traceutil/trace.go:171","msg":"trace[131885823] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"126.250212ms","start":"2024-08-29T18:56:27.649785Z","end":"2024-08-29T18:56:27.776036Z","steps":["trace[131885823] 'process raft request'  (duration: 126.157685ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:27.840296Z","caller":"traceutil/trace.go:171","msg":"trace[810940815] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"187.691507ms","start":"2024-08-29T18:56:27.652591Z","end":"2024-08-29T18:56:27.840283Z","steps":["trace[810940815] 'process raft request'  (duration: 186.946255ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:34.478437Z","caller":"traceutil/trace.go:171","msg":"trace[599162539] linearizableReadLoop","detail":"{readStateIndex:696; appliedIndex:695; }","duration":"324.630655ms","start":"2024-08-29T18:56:34.153795Z","end":"2024-08-29T18:56:34.478425Z","steps":["trace[599162539] 'read index received'  (duration: 324.36994ms)","trace[599162539] 'applied index is now lower than readState.Index'  (duration: 260.2µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:56:34.478679Z","caller":"traceutil/trace.go:171","msg":"trace[1236827111] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"353.056846ms","start":"2024-08-29T18:56:34.125614Z","end":"2024-08-29T18:56:34.478671Z","steps":["trace[1236827111] 'process raft request'  (duration: 352.61231ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:34.479116Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:56:34.125594Z","time spent":"353.118083ms","remote":"127.0.0.1:36040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3173,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-922931-m03\" mod_revision:637 > success:<request_put:<key:\"/registry/minions/multinode-922931-m03\" value_size:3127 >> failure:<request_range:<key:\"/registry/minions/multinode-922931-m03\" > >"}
	{"level":"warn","ts":"2024-08-29T18:56:34.479313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.394829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.171\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-08-29T18:56:34.481803Z","caller":"traceutil/trace.go:171","msg":"trace[1199810598] range","detail":"{range_begin:/registry/masterleases/192.168.39.171; range_end:; response_count:1; response_revision:658; }","duration":"257.885301ms","start":"2024-08-29T18:56:34.223904Z","end":"2024-08-29T18:56:34.481789Z","steps":["trace[1199810598] 'agreement among raft nodes before linearized reading'  (duration: 255.318841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:34.479369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.569017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-922931-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:56:34.482017Z","caller":"traceutil/trace.go:171","msg":"trace[391663869] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-922931-m03; range_end:; response_count:0; response_revision:658; }","duration":"328.218803ms","start":"2024-08-29T18:56:34.153790Z","end":"2024-08-29T18:56:34.482009Z","steps":["trace[391663869] 'agreement among raft nodes before linearized reading'  (duration: 325.555982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:56:34.482060Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T18:56:34.153758Z","time spent":"328.288766ms","remote":"127.0.0.1:36116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":0,"response size":28,"request content":"key:\"/registry/leases/kube-node-lease/multinode-922931-m03\" "}
	{"level":"info","ts":"2024-08-29T18:59:37.971045Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-29T18:59:37.971195Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-922931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"]}
	{"level":"warn","ts":"2024-08-29T18:59:37.971301Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:59:37.971395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:59:38.054619Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.171:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:59:38.054735Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.171:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T18:59:38.056307Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4e6b9cdcc1ed933f","current-leader-member-id":"4e6b9cdcc1ed933f"}
	{"level":"info","ts":"2024-08-29T18:59:38.059253Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T18:59:38.059385Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T18:59:38.059405Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-922931","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"]}
	
	
	==> etcd [99afe537efacfbfbd23e4a327a7839185834c590327290ee8f584b802064b857] <==
	{"level":"info","ts":"2024-08-29T19:01:19.901310Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","added-peer-id":"4e6b9cdcc1ed933f","added-peer-peer-urls":["https://192.168.39.171:2380"]}
	{"level":"info","ts":"2024-08-29T19:01:19.901459Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c9ee22fca1de3e71","local-member-id":"4e6b9cdcc1ed933f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:01:19.901483Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:01:19.905542Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:01:19.909568Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:01:19.909824Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T19:01:19.909852Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.171:2380"}
	{"level":"info","ts":"2024-08-29T19:01:19.913637Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4e6b9cdcc1ed933f","initial-advertise-peer-urls":["https://192.168.39.171:2380"],"listen-peer-urls":["https://192.168.39.171:2380"],"advertise-client-urls":["https://192.168.39.171:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.171:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:01:19.913695Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:01:21.380478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T19:01:21.380625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:01:21.380678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgPreVoteResp from 4e6b9cdcc1ed933f at term 2"}
	{"level":"info","ts":"2024-08-29T19:01:21.380716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.380744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f received MsgVoteResp from 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.380771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e6b9cdcc1ed933f became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.380797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e6b9cdcc1ed933f elected leader 4e6b9cdcc1ed933f at term 3"}
	{"level":"info","ts":"2024-08-29T19:01:21.387484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:01:21.387451Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4e6b9cdcc1ed933f","local-member-attributes":"{Name:multinode-922931 ClientURLs:[https://192.168.39.171:2379]}","request-path":"/0/members/4e6b9cdcc1ed933f/attributes","cluster-id":"c9ee22fca1de3e71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:01:21.387740Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:01:21.388731Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:01:21.389344Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:01:21.389518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.171:2379"}
	{"level":"info","ts":"2024-08-29T19:01:21.390714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:01:21.391139Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:01:21.391182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:05:27 up 11 min,  0 users,  load average: 0.13, 0.21, 0.14
	Linux multinode-922931 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [04ed982a9d24605accad6756d48a0dfdb864203c903617629c9112f311a008ba] <==
	I0829 18:58:56.348283       1 main.go:299] handling current node
	I0829 18:59:06.342773       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:06.342830       1 main.go:299] handling current node
	I0829 18:59:06.342849       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:06.342856       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:06.343003       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:06.343020       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 18:59:16.342929       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:16.342985       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:16.343164       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:16.343186       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 18:59:16.343242       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:16.343258       1 main.go:299] handling current node
	I0829 18:59:26.343035       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:26.343234       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	I0829 18:59:26.343409       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:26.343441       1 main.go:299] handling current node
	I0829 18:59:26.343471       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:26.343488       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:36.343505       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 18:59:36.343570       1 main.go:299] handling current node
	I0829 18:59:36.343594       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 18:59:36.343600       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 18:59:36.343765       1 main.go:295] Handling node with IPs: map[192.168.39.226:{}]
	I0829 18:59:36.343787       1 main.go:322] Node multinode-922931-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [51cf7b4a2b4277b833fddb33fd5bca910084eabbfd9fab545d3564e743702116] <==
	I0829 19:04:24.445020       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:04:34.444949       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:04:34.445137       1 main.go:299] handling current node
	I0829 19:04:34.445175       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:04:34.445197       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:04:44.453975       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:04:44.454049       1 main.go:299] handling current node
	I0829 19:04:44.454100       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:04:44.454107       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:04:54.444842       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:04:54.445022       1 main.go:299] handling current node
	I0829 19:04:54.445060       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:04:54.445139       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:05:04.451796       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:05:04.451903       1 main.go:299] handling current node
	I0829 19:05:04.451942       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:05:04.451961       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:05:14.446913       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:05:14.446967       1 main.go:299] handling current node
	I0829 19:05:14.446982       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:05:14.446987       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	I0829 19:05:24.445668       1 main.go:295] Handling node with IPs: map[192.168.39.171:{}]
	I0829 19:05:24.445788       1 main.go:299] handling current node
	I0829 19:05:24.445816       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0829 19:05:24.445834       1 main.go:322] Node multinode-922931-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7867424ad4b0400220c03410b4fdd04b6adf9bfc76e89accb5670500f120a547] <==
	I0829 18:54:35.920224       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0829 18:54:35.934780       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 18:54:40.353681       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 18:54:40.493775       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0829 18:55:57.486579       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49480: use of closed network connection
	E0829 18:55:57.650906       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49512: use of closed network connection
	E0829 18:55:57.818967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49540: use of closed network connection
	E0829 18:55:57.983342       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49554: use of closed network connection
	E0829 18:55:58.146697       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49578: use of closed network connection
	E0829 18:55:58.317382       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49586: use of closed network connection
	E0829 18:55:58.582727       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49618: use of closed network connection
	E0829 18:55:58.742340       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49624: use of closed network connection
	E0829 18:55:58.901851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49656: use of closed network connection
	E0829 18:55:59.080389       1 conn.go:339] Error on socket receive: read tcp 192.168.39.171:8443->192.168.39.1:49670: use of closed network connection
	I0829 18:59:37.970515       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0829 18:59:37.997884       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.997943       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.997985       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998024       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998119       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998163       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:37.998255       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:38.000190       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:38.000251       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 18:59:38.000299       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bf6a4434171610963f1f43e580883fe71e0d47f058737130fe8ae970e0cf41e6] <==
	I0829 19:01:22.697789       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:01:22.707187       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:01:22.709132       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:01:22.709212       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:01:22.709238       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:01:22.718996       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:01:22.719098       1 policy_source.go:224] refreshing policies
	I0829 19:01:22.762450       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:01:22.762561       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:01:22.764898       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:01:22.765017       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:01:22.765042       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:01:22.765189       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:01:22.766059       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:01:22.768922       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0829 19:01:22.773217       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0829 19:01:22.774972       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:01:23.568736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:01:24.399020       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:01:24.530637       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:01:24.544969       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:01:24.625303       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:01:24.631980       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:01:26.369649       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:01:26.418018       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [03ed977ad4a1d1d7530baf3c6e9e0da7cf3f3a3bdeb1373407a75ec6e004037e] <==
	I0829 18:57:11.275680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:57:11.275954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.274397       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-922931-m03\" does not exist"
	I0829 18:57:12.275407       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:57:12.292933       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922931-m03" podCIDRs=["10.244.3.0/24"]
	I0829 18:57:12.293039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.293123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.301483       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.310525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:12.652807       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:14.846176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:22.445460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:32.655439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:32.655666       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:57:32.662784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:57:34.803652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:58:14.821257       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:58:14.825719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 18:58:14.828499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 18:58:14.855305       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 18:58:14.862438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 18:58:14.890512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.818726ms"
	I0829 18:58:14.890596       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.694µs"
	I0829 18:58:19.981289       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 18:58:30.072646       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	
	
	==> kube-controller-manager [cd16207ad3b78365f5336ee8ac71b51672477a007a026be84553bb048a74af5e] <==
	I0829 19:02:42.911313       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922931-m03" podCIDRs=["10.244.2.0/24"]
	I0829 19:02:42.911940       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:42.912294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:42.922923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:43.348737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:43.661893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:46.244901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:02:53.310442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:01.125026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:01.125637       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 19:03:01.136604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:01.163776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:05.669757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:05.690734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:06.116855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m03"
	I0829 19:03:06.117055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922931-m02"
	I0829 19:03:46.182029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 19:03:46.201191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 19:03:46.211030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.391358ms"
	I0829 19:03:46.211152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.179µs"
	I0829 19:03:51.266493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922931-m02"
	I0829 19:04:06.096287       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fjbnr"
	I0829 19:04:06.134125       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fjbnr"
	I0829 19:04:06.134286       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-z7svl"
	I0829 19:04:06.174441       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-z7svl"
	
	
	==> kube-proxy [b8aee643ba50193e37851993c8871e41a13c5fab876e4ca1ea3c17bf44c3ef94] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:01:23.639245       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:01:23.664737       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0829 19:01:23.664840       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:01:23.725210       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:01:23.725314       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:01:23.725505       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:01:23.728620       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:01:23.729291       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:01:23.729377       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:01:23.730963       1 config.go:197] "Starting service config controller"
	I0829 19:01:23.732177       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:01:23.732766       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:01:23.732857       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:01:23.733903       1 config.go:326] "Starting node config controller"
	I0829 19:01:23.735150       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:01:23.833930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:01:23.834049       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:01:23.835709       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0c82b2494ec0493892f1049526fce1381fa6d74a08fcb9df5d5f226b99bf626] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:54:42.600139       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:54:42.608731       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.171"]
	E0829 18:54:42.608803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:54:42.639303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:54:42.639383       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:54:42.639409       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:54:42.641513       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:54:42.641808       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:54:42.641830       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:54:42.645464       1 config.go:197] "Starting service config controller"
	I0829 18:54:42.645575       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:54:42.645835       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:54:42.645875       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:54:42.646659       1 config.go:326] "Starting node config controller"
	I0829 18:54:42.646702       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:54:42.745902       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:54:42.745918       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:54:42.747364       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [08466cf1de50ca3a78f8fc03c118b634b73459a82a36469ee77808d8b83164ad] <==
	I0829 19:01:20.716250       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:01:22.642998       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:01:22.643207       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:01:22.643293       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:01:22.643324       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:01:22.706035       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:01:22.706156       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:01:22.713748       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:01:22.713883       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:01:22.714619       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:01:22.716144       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:01:22.814774       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [629bd4d21adaa6a9870fc1361b3d9b2dd0d43fd37de83c36b97a3bb87199a632] <==
	E0829 18:54:32.533899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.431482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.432163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.492861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.492914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.569663       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:54:33.570890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 18:54:33.620649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:54:33.620746       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.623134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:54:33.623228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.781391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:54:33.781517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.786376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:54:33.786476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.801060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:54:33.801205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.801235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.803059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.842805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:54:33.842855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:54:33.852573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:54:33.852654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0829 18:54:36.517979       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 18:59:37.977939       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 29 19:04:08 multinode-922931 kubelet[2930]: E0829 19:04:08.912988    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958248911855942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:18 multinode-922931 kubelet[2930]: E0829 19:04:18.875956    2930 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:04:18 multinode-922931 kubelet[2930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:04:18 multinode-922931 kubelet[2930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:04:18 multinode-922931 kubelet[2930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:04:18 multinode-922931 kubelet[2930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:04:18 multinode-922931 kubelet[2930]: E0829 19:04:18.914303    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958258913977287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:18 multinode-922931 kubelet[2930]: E0829 19:04:18.914326    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958258913977287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:28 multinode-922931 kubelet[2930]: E0829 19:04:28.917343    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958268916727931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:28 multinode-922931 kubelet[2930]: E0829 19:04:28.917642    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958268916727931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:38 multinode-922931 kubelet[2930]: E0829 19:04:38.919568    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958278919113516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:38 multinode-922931 kubelet[2930]: E0829 19:04:38.919606    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958278919113516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:48 multinode-922931 kubelet[2930]: E0829 19:04:48.921013    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958288920607881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:48 multinode-922931 kubelet[2930]: E0829 19:04:48.921055    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958288920607881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:58 multinode-922931 kubelet[2930]: E0829 19:04:58.922677    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958298922433942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:04:58 multinode-922931 kubelet[2930]: E0829 19:04:58.922717    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958298922433942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:05:08 multinode-922931 kubelet[2930]: E0829 19:05:08.924248    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958308923902656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:05:08 multinode-922931 kubelet[2930]: E0829 19:05:08.924293    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958308923902656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:05:18 multinode-922931 kubelet[2930]: E0829 19:05:18.874907    2930 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:05:18 multinode-922931 kubelet[2930]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:05:18 multinode-922931 kubelet[2930]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:05:18 multinode-922931 kubelet[2930]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:05:18 multinode-922931 kubelet[2930]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:05:18 multinode-922931 kubelet[2930]: E0829 19:05:18.926184    2930 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958318925883394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:05:18 multinode-922931 kubelet[2930]: E0829 19:05:18.926220    2930 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724958318925883394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:05:27.075524   52026 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
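The repeating kubelet messages in the log above ("Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" and the ip6tables canary failure) recur throughout this run and read as node-level noise rather than the cause of the stop failure: the former suggests the kubelet could not derive dedicated-image-filesystem stats from CRI-O's ImageFsInfo response, and the latter that the guest kernel exposes no ip6tables nat table, so the KUBE-KUBELET-CANARY chain cannot be created. If a live node were available, this could be double-checked with commands such as "sudo crictl imagefsinfo" and "sudo ip6tables -t nat -L" run inside the guest via "minikube ssh -p multinode-922931" (illustrative commands, not part of the recorded run).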
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-922931 -n multinode-922931
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-922931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.28s)
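A separate harness-side detail from the stderr above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is what bufio.Scanner reports when a single line exceeds its default 64 KiB token limit, so the post-mortem could not echo the previous start log. A minimal, self-contained Go sketch of reading such a file with an enlarged scanner buffer (illustrative only; the path and buffer sizes are assumptions, and this is not minikube's logs.go):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-token limit is 64 KiB; allow lines up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above, an over-long line surfaces here
			// as bufio.ErrTooLong ("bufio.Scanner: token too long").
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}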

                                                
                                    
x
+
TestPreload (270.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-818083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0829 19:09:49.633511   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-818083 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m7.690345622s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-818083 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-818083 image pull gcr.io/k8s-minikube/busybox: (3.348868828s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-818083
E0829 19:13:09.774340   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-818083: exit status 82 (2m0.459793693s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-818083"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-818083 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-29 19:13:26.376194421 +0000 UTC m=+4079.096987201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-818083 -n test-preload-818083
E0829 19:13:26.705909   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-818083 -n test-preload-818083: exit status 3 (18.553532213s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:13:44.926505   54891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.122:22: connect: no route to host
	E0829 19:13:44.926528   54891 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.122:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-818083" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-818083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-818083
--- FAIL: TestPreload (270.97s)
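The failing step here is the stop path rather than the preload itself: "minikube stop" exits with status 82 because the VM still reports state "Running" when the GUEST_STOP_TIMEOUT is hit, and the follow-up status probe then gets "no route to host" on 192.168.39.122:22, so the guest became unreachable between the failed stop and the post-mortem. Outside CI the next step would be the one printed in the box above ("minikube logs --file=logs.txt"); a leftover KVM domain could also be inspected or torn down by hand with standard libvirt tooling such as "virsh list --all" and "virsh destroy <domain>" (manual suggestions, not something the recorded run did), while the harness itself just finishes with "minikube delete".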

                                                
                                    
x
+
TestKubernetesUpgrade (430.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m49.818530663s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-353455] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-353455" primary control-plane node in "kubernetes-upgrade-353455" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
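The condensed stdout above already hints at the failure mode: the "Generating certificates and keys ... / Booting up control plane ..." pair is printed twice, which is consistent with the bootstrapper failing to bring up the v1.20.0 control plane on CRI-O and retrying once before the start command gives up with exit status 109. The verbose stderr below carries the full provisioning trace for the kvm2 machine.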
** stderr ** 
	I0829 19:15:40.272855   55990 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:15:40.272992   55990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:15:40.273004   55990 out.go:358] Setting ErrFile to fd 2...
	I0829 19:15:40.273011   55990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:15:40.273212   55990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:15:40.274067   55990 out.go:352] Setting JSON to false
	I0829 19:15:40.275281   55990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7087,"bootTime":1724951853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:15:40.275392   55990 start.go:139] virtualization: kvm guest
	I0829 19:15:40.277874   55990 out.go:177] * [kubernetes-upgrade-353455] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:15:40.279259   55990 notify.go:220] Checking for updates...
	I0829 19:15:40.280428   55990 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:15:40.281995   55990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:15:40.285195   55990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:15:40.287471   55990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:15:40.288772   55990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:15:40.291664   55990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:15:40.293297   55990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:15:40.333729   55990 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 19:15:40.334913   55990 start.go:297] selected driver: kvm2
	I0829 19:15:40.334924   55990 start.go:901] validating driver "kvm2" against <nil>
	I0829 19:15:40.334937   55990 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:15:40.335666   55990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:15:40.357932   55990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:15:40.373934   55990 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:15:40.374008   55990 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 19:15:40.374310   55990 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 19:15:40.374392   55990 cni.go:84] Creating CNI manager for ""
	I0829 19:15:40.374409   55990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:15:40.374420   55990 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 19:15:40.374500   55990 start.go:340] cluster config:
	{Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:15:40.374640   55990 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:15:40.376344   55990 out.go:177] * Starting "kubernetes-upgrade-353455" primary control-plane node in "kubernetes-upgrade-353455" cluster
	I0829 19:15:40.377547   55990 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:15:40.377587   55990 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:15:40.377608   55990 cache.go:56] Caching tarball of preloaded images
	I0829 19:15:40.377714   55990 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:15:40.377729   55990 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:15:40.378135   55990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/config.json ...
	I0829 19:15:40.378166   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/config.json: {Name:mk100d589cd65c8671467f99a2b3558f752d94f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:15:40.378333   55990 start.go:360] acquireMachinesLock for kubernetes-upgrade-353455: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:16:02.362986   55990 start.go:364] duration metric: took 21.984622473s to acquireMachinesLock for "kubernetes-upgrade-353455"
	I0829 19:16:02.363057   55990 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:16:02.363184   55990 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 19:16:02.365133   55990 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:16:02.365386   55990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:16:02.365438   55990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:16:02.382348   55990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0829 19:16:02.382775   55990 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:16:02.383354   55990 main.go:141] libmachine: Using API Version  1
	I0829 19:16:02.383382   55990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:16:02.383724   55990 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:16:02.383893   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetMachineName
	I0829 19:16:02.384063   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:02.384218   55990 start.go:159] libmachine.API.Create for "kubernetes-upgrade-353455" (driver="kvm2")
	I0829 19:16:02.384247   55990 client.go:168] LocalClient.Create starting
	I0829 19:16:02.384280   55990 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 19:16:02.384310   55990 main.go:141] libmachine: Decoding PEM data...
	I0829 19:16:02.384335   55990 main.go:141] libmachine: Parsing certificate...
	I0829 19:16:02.384387   55990 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 19:16:02.384406   55990 main.go:141] libmachine: Decoding PEM data...
	I0829 19:16:02.384420   55990 main.go:141] libmachine: Parsing certificate...
	I0829 19:16:02.384435   55990 main.go:141] libmachine: Running pre-create checks...
	I0829 19:16:02.384447   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .PreCreateCheck
	I0829 19:16:02.384784   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetConfigRaw
	I0829 19:16:02.385136   55990 main.go:141] libmachine: Creating machine...
	I0829 19:16:02.385154   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .Create
	I0829 19:16:02.385278   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Creating KVM machine...
	I0829 19:16:02.386371   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found existing default KVM network
	I0829 19:16:02.387351   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:02.387186   56289 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:02:c0:c9} reservation:<nil>}
	I0829 19:16:02.388183   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:02.388096   56289 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0829 19:16:02.388216   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | created network xml: 
	I0829 19:16:02.388236   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | <network>
	I0829 19:16:02.388251   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |   <name>mk-kubernetes-upgrade-353455</name>
	I0829 19:16:02.388260   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |   <dns enable='no'/>
	I0829 19:16:02.388273   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |   
	I0829 19:16:02.388284   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0829 19:16:02.388301   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |     <dhcp>
	I0829 19:16:02.388315   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0829 19:16:02.388337   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |     </dhcp>
	I0829 19:16:02.388352   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |   </ip>
	I0829 19:16:02.388370   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG |   
	I0829 19:16:02.388381   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | </network>
	I0829 19:16:02.388395   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | 
	I0829 19:16:02.393920   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | trying to create private KVM network mk-kubernetes-upgrade-353455 192.168.50.0/24...
	I0829 19:16:02.464532   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | private KVM network mk-kubernetes-upgrade-353455 192.168.50.0/24 created
	I0829 19:16:02.464578   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:02.464487   56289 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:16:02.464600   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455 ...
	I0829 19:16:02.464613   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 19:16:02.464633   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 19:16:02.718696   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:02.718541   56289 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa...
	I0829 19:16:02.795934   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:02.795796   56289 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/kubernetes-upgrade-353455.rawdisk...
	I0829 19:16:02.795972   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Writing magic tar header
	I0829 19:16:02.795988   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Writing SSH key tar header
	I0829 19:16:02.796000   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:02.795946   56289 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455 ...
	I0829 19:16:02.796078   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455
	I0829 19:16:02.796102   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455 (perms=drwx------)
	I0829 19:16:02.796115   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 19:16:02.796135   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:16:02.796151   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 19:16:02.796168   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:16:02.796195   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:16:02.796212   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:16:02.796234   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 19:16:02.796246   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 19:16:02.796262   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:16:02.796271   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:16:02.796303   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Checking permissions on dir: /home
	I0829 19:16:02.796325   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Creating domain...
	I0829 19:16:02.796338   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Skipping /home - not owner
	I0829 19:16:02.797364   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) define libvirt domain using xml: 
	I0829 19:16:02.797374   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) <domain type='kvm'>
	I0829 19:16:02.797383   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <name>kubernetes-upgrade-353455</name>
	I0829 19:16:02.797391   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <memory unit='MiB'>2200</memory>
	I0829 19:16:02.797418   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <vcpu>2</vcpu>
	I0829 19:16:02.797428   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <features>
	I0829 19:16:02.797434   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <acpi/>
	I0829 19:16:02.797441   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <apic/>
	I0829 19:16:02.797467   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <pae/>
	I0829 19:16:02.797490   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     
	I0829 19:16:02.797500   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   </features>
	I0829 19:16:02.797513   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <cpu mode='host-passthrough'>
	I0829 19:16:02.797524   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   
	I0829 19:16:02.797534   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   </cpu>
	I0829 19:16:02.797547   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <os>
	I0829 19:16:02.797557   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <type>hvm</type>
	I0829 19:16:02.797569   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <boot dev='cdrom'/>
	I0829 19:16:02.797580   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <boot dev='hd'/>
	I0829 19:16:02.797590   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <bootmenu enable='no'/>
	I0829 19:16:02.797601   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   </os>
	I0829 19:16:02.797609   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   <devices>
	I0829 19:16:02.797619   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <disk type='file' device='cdrom'>
	I0829 19:16:02.797637   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/boot2docker.iso'/>
	I0829 19:16:02.797663   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <target dev='hdc' bus='scsi'/>
	I0829 19:16:02.797680   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <readonly/>
	I0829 19:16:02.797695   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </disk>
	I0829 19:16:02.797706   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <disk type='file' device='disk'>
	I0829 19:16:02.797720   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:16:02.797735   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/kubernetes-upgrade-353455.rawdisk'/>
	I0829 19:16:02.797760   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <target dev='hda' bus='virtio'/>
	I0829 19:16:02.797776   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </disk>
	I0829 19:16:02.797789   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <interface type='network'>
	I0829 19:16:02.797800   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <source network='mk-kubernetes-upgrade-353455'/>
	I0829 19:16:02.797817   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <model type='virtio'/>
	I0829 19:16:02.797826   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </interface>
	I0829 19:16:02.797832   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <interface type='network'>
	I0829 19:16:02.797837   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <source network='default'/>
	I0829 19:16:02.797843   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <model type='virtio'/>
	I0829 19:16:02.797847   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </interface>
	I0829 19:16:02.797857   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <serial type='pty'>
	I0829 19:16:02.797867   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <target port='0'/>
	I0829 19:16:02.797872   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </serial>
	I0829 19:16:02.797877   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <console type='pty'>
	I0829 19:16:02.797884   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <target type='serial' port='0'/>
	I0829 19:16:02.797890   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </console>
	I0829 19:16:02.797896   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     <rng model='virtio'>
	I0829 19:16:02.797908   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)       <backend model='random'>/dev/random</backend>
	I0829 19:16:02.797928   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     </rng>
	I0829 19:16:02.797947   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     
	I0829 19:16:02.797959   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)     
	I0829 19:16:02.797975   55990 main.go:141] libmachine: (kubernetes-upgrade-353455)   </devices>
	I0829 19:16:02.797986   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) </domain>
	I0829 19:16:02.797994   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) 
	I0829 19:16:02.802787   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:74:8c:0e in network default
	I0829 19:16:02.803361   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Ensuring networks are active...
	I0829 19:16:02.803385   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:02.804058   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Ensuring network default is active
	I0829 19:16:02.804429   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Ensuring network mk-kubernetes-upgrade-353455 is active
	I0829 19:16:02.804871   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Getting domain xml...
	I0829 19:16:02.805522   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Creating domain...
	I0829 19:16:04.126817   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Waiting to get IP...
	I0829 19:16:04.127802   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:04.128142   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:04.128188   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:04.128129   56289 retry.go:31] will retry after 254.601019ms: waiting for machine to come up
	I0829 19:16:04.385126   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:04.385710   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:04.385739   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:04.385667   56289 retry.go:31] will retry after 291.021827ms: waiting for machine to come up
	I0829 19:16:04.678249   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:04.678705   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:04.678750   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:04.678683   56289 retry.go:31] will retry after 464.711035ms: waiting for machine to come up
	I0829 19:16:05.145425   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:05.145877   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:05.145904   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:05.145849   56289 retry.go:31] will retry after 578.862831ms: waiting for machine to come up
	I0829 19:16:05.726413   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:05.726890   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:05.726918   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:05.726848   56289 retry.go:31] will retry after 619.67851ms: waiting for machine to come up
	I0829 19:16:06.348242   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:06.348700   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:06.348729   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:06.348654   56289 retry.go:31] will retry after 669.745597ms: waiting for machine to come up
	I0829 19:16:07.020501   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:07.021020   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:07.021051   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:07.020966   56289 retry.go:31] will retry after 781.112205ms: waiting for machine to come up
	I0829 19:16:07.803302   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:07.803694   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:07.803720   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:07.803659   56289 retry.go:31] will retry after 1.473353445s: waiting for machine to come up
	I0829 19:16:09.278118   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:09.278569   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:09.278601   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:09.278513   56289 retry.go:31] will retry after 1.649377495s: waiting for machine to come up
	I0829 19:16:10.929109   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:10.929514   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:10.929536   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:10.929464   56289 retry.go:31] will retry after 1.652524043s: waiting for machine to come up
	I0829 19:16:12.584114   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:12.584649   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:12.584678   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:12.584598   56289 retry.go:31] will retry after 2.012189967s: waiting for machine to come up
	I0829 19:16:14.600131   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:14.600547   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:14.600570   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:14.600522   56289 retry.go:31] will retry after 2.589379607s: waiting for machine to come up
	I0829 19:16:17.191081   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:17.191547   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:17.191573   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:17.191512   56289 retry.go:31] will retry after 3.099526664s: waiting for machine to come up
	I0829 19:16:20.296186   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:20.296671   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find current IP address of domain kubernetes-upgrade-353455 in network mk-kubernetes-upgrade-353455
	I0829 19:16:20.296696   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | I0829 19:16:20.296628   56289 retry.go:31] will retry after 3.801900508s: waiting for machine to come up
	I0829 19:16:24.099671   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.100114   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Found IP for machine: 192.168.50.102
	I0829 19:16:24.100140   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Reserving static IP address...
	I0829 19:16:24.100156   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has current primary IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.100524   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-353455", mac: "52:54:00:13:18:17", ip: "192.168.50.102"} in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.264075   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Getting to WaitForSSH function...
	I0829 19:16:24.264106   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Reserved static IP address: 192.168.50.102
	I0829 19:16:24.264125   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Waiting for SSH to be available...
	I0829 19:16:24.267272   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.267605   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.267631   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.267778   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Using SSH client type: external
	I0829 19:16:24.267818   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa (-rw-------)
	I0829 19:16:24.267865   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:16:24.267880   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | About to run SSH command:
	I0829 19:16:24.267915   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | exit 0
	I0829 19:16:24.410200   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | SSH cmd err, output: <nil>: 
	I0829 19:16:24.410475   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) KVM machine creation complete!
	I0829 19:16:24.410859   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetConfigRaw
	I0829 19:16:24.422431   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:24.422738   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:24.422959   55990 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:16:24.422994   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetState
	I0829 19:16:24.424403   55990 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:16:24.424423   55990 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:16:24.424430   55990 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:16:24.424439   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:24.426904   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.427223   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.427247   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.427412   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:24.427593   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.427730   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.427929   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:24.428089   55990 main.go:141] libmachine: Using SSH client type: native
	I0829 19:16:24.428294   55990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:16:24.428307   55990 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:16:24.529339   55990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:16:24.529379   55990 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:16:24.529393   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:24.532261   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.532676   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.532706   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.532829   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:24.533014   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.533179   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.533333   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:24.533502   55990 main.go:141] libmachine: Using SSH client type: native
	I0829 19:16:24.533721   55990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:16:24.533734   55990 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:16:24.636020   55990 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:16:24.636123   55990 main.go:141] libmachine: found compatible host: buildroot
	I0829 19:16:24.636138   55990 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:16:24.636154   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetMachineName
	I0829 19:16:24.636404   55990 buildroot.go:166] provisioning hostname "kubernetes-upgrade-353455"
	I0829 19:16:24.636432   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetMachineName
	I0829 19:16:24.636621   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:24.639517   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.639952   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.639987   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.640137   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:24.640325   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.640496   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.640618   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:24.640762   55990 main.go:141] libmachine: Using SSH client type: native
	I0829 19:16:24.640929   55990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:16:24.640940   55990 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-353455 && echo "kubernetes-upgrade-353455" | sudo tee /etc/hostname
	I0829 19:16:24.752636   55990 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-353455
	
	I0829 19:16:24.752659   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:24.755655   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.756097   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.756126   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.756345   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:24.756554   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.756822   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.756962   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:24.757154   55990 main.go:141] libmachine: Using SSH client type: native
	I0829 19:16:24.757354   55990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:16:24.757373   55990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-353455' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-353455/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-353455' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:16:24.862505   55990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
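
The SSH command above is the idempotent hostname fix-up: leave /etc/hosts alone if an entry for kubernetes-upgrade-353455 already exists, otherwise rewrite the 127.0.1.1 line or append one. The Go sketch below reproduces that logic under stated assumptions; the /tmp/hosts-example path and the simplified regular expressions are illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry adds "127.0.1.1 <hostname>" to a hosts file unless some entry
// for the hostname is already present, mirroring the shell snippet in the log.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)

	// Already present (any address followed by whitespace and the hostname)?
	present := regexp.MustCompile(`(?m)^\S+\s+` + regexp.QuoteMeta(hostname) + `$`)
	if present.MatchString(content) {
		return nil
	}

	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	// Hypothetical scratch file; the real command edits /etc/hosts on the guest.
	if err := ensureHostsEntry("/tmp/hosts-example", "kubernetes-upgrade-353455"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
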
	I0829 19:16:24.862542   55990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:16:24.862566   55990 buildroot.go:174] setting up certificates
	I0829 19:16:24.862583   55990 provision.go:84] configureAuth start
	I0829 19:16:24.862596   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetMachineName
	I0829 19:16:24.862933   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:16:24.865371   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.865726   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.865763   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.865959   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:24.868210   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.868546   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.868574   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.868678   55990 provision.go:143] copyHostCerts
	I0829 19:16:24.868743   55990 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:16:24.868759   55990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:16:24.868809   55990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:16:24.868904   55990 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:16:24.868912   55990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:16:24.868930   55990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:16:24.869027   55990 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:16:24.869036   55990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:16:24.869054   55990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:16:24.869112   55990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-353455 san=[127.0.0.1 192.168.50.102 kubernetes-upgrade-353455 localhost minikube]
	I0829 19:16:24.970112   55990 provision.go:177] copyRemoteCerts
	I0829 19:16:24.970176   55990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:16:24.970199   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:24.973106   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.973432   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:24.973463   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:24.973674   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:24.973894   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:24.974034   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:24.974171   55990 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:16:25.051971   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:16:25.074916   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:16:25.098150   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0829 19:16:25.128873   55990 provision.go:87] duration metric: took 266.275879ms to configureAuth
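
configureAuth boils down to signing a server certificate against the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.50.102, the machine name, localhost, minikube) and copying it to /etc/docker on the guest. The sketch below shows roughly what that signing step involves using Go's crypto/x509; it generates a throwaway CA instead of loading ca.pem/ca-key.pem, and key sizes, validity periods, and error handling are illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA generated on the fly; the real flow loads the existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs reported in the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-353455"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-353455", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.102")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// The resulting PEM would be written to server.pem and scp'd to /etc/docker.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
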
	I0829 19:16:25.128920   55990 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:16:25.129108   55990 config.go:182] Loaded profile config "kubernetes-upgrade-353455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:16:25.129187   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:25.132215   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.132606   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.132627   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.132820   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:25.133017   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.133254   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.133400   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:25.133622   55990 main.go:141] libmachine: Using SSH client type: native
	I0829 19:16:25.133816   55990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:16:25.133830   55990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:16:25.443113   55990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:16:25.443139   55990 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:16:25.443148   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetURL
	I0829 19:16:25.444339   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | Using libvirt version 6000000
	I0829 19:16:25.446904   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.447240   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.447261   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.447449   55990 main.go:141] libmachine: Docker is up and running!
	I0829 19:16:25.447461   55990 main.go:141] libmachine: Reticulating splines...
	I0829 19:16:25.447467   55990 client.go:171] duration metric: took 23.063211057s to LocalClient.Create
	I0829 19:16:25.447490   55990 start.go:167] duration metric: took 23.063274474s to libmachine.API.Create "kubernetes-upgrade-353455"
	I0829 19:16:25.447499   55990 start.go:293] postStartSetup for "kubernetes-upgrade-353455" (driver="kvm2")
	I0829 19:16:25.447509   55990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:16:25.447525   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:25.447742   55990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:16:25.447767   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:25.450013   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.450336   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.450361   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.450524   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:25.450689   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.450854   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:25.450999   55990 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:16:25.528403   55990 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:16:25.532853   55990 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:16:25.532882   55990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:16:25.532951   55990 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:16:25.533046   55990 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:16:25.533156   55990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:16:25.542915   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:16:25.565680   55990 start.go:296] duration metric: took 118.167778ms for postStartSetup
	I0829 19:16:25.565737   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetConfigRaw
	I0829 19:16:25.566326   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:16:25.569532   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.569947   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.569977   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.570257   55990 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/config.json ...
	I0829 19:16:25.570509   55990 start.go:128] duration metric: took 23.207311204s to createHost
	I0829 19:16:25.570544   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:25.572993   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.573460   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.573487   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.573634   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:25.573821   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.573970   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.574143   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:25.574304   55990 main.go:141] libmachine: Using SSH client type: native
	I0829 19:16:25.574506   55990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:16:25.574519   55990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:16:25.674526   55990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724958985.645966353
	
	I0829 19:16:25.674554   55990 fix.go:216] guest clock: 1724958985.645966353
	I0829 19:16:25.674565   55990 fix.go:229] Guest: 2024-08-29 19:16:25.645966353 +0000 UTC Remote: 2024-08-29 19:16:25.57052949 +0000 UTC m=+45.336746294 (delta=75.436863ms)
	I0829 19:16:25.674592   55990 fix.go:200] guest clock delta is within tolerance: 75.436863ms
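
The clock check runs date +%s.%N on the guest and compares the result to the host clock; here the skew is 75.436863ms and is accepted. A small illustrative Go version of that comparison follows, reusing the values from the log; the 2s tolerance is an assumption for the example, not minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host-side timestamp and guest output taken from the log lines above.
	host := time.Date(2024, 8, 29, 19, 16, 25, 570529490, time.UTC)
	delta, err := clockDelta("1724958985.645966353", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	ok := math.Abs(float64(delta)) < float64(tolerance)
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, ok)
}
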
	I0829 19:16:25.674599   55990 start.go:83] releasing machines lock for "kubernetes-upgrade-353455", held for 23.311582259s
	I0829 19:16:25.674627   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:25.674904   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:16:25.678309   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.678706   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.678734   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.678993   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:25.679619   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:25.679813   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:16:25.679940   55990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:16:25.679981   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:25.680081   55990 ssh_runner.go:195] Run: cat /version.json
	I0829 19:16:25.680114   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:16:25.683272   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.683485   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.683719   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.683748   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.683869   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:25.683896   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:25.684100   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:25.684118   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:16:25.684330   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.684338   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:16:25.684529   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:25.684559   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:16:25.684699   55990 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:16:25.684767   55990 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:16:25.767898   55990 ssh_runner.go:195] Run: systemctl --version
	I0829 19:16:25.801970   55990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:16:25.962355   55990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:16:25.967911   55990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:16:25.967976   55990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:16:25.985669   55990 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:16:25.985692   55990 start.go:495] detecting cgroup driver to use...
	I0829 19:16:25.985765   55990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:16:26.004229   55990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:16:26.018699   55990 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:16:26.018752   55990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:16:26.032737   55990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:16:26.046273   55990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:16:26.172021   55990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:16:26.341689   55990 docker.go:233] disabling docker service ...
	I0829 19:16:26.341756   55990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:16:26.359516   55990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:16:26.374952   55990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:16:26.529736   55990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:16:26.672961   55990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:16:26.689938   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:16:26.709269   55990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:16:26.709353   55990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:16:26.720672   55990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:16:26.720760   55990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:16:26.732434   55990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:16:26.743136   55990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:16:26.753552   55990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:16:26.766677   55990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:16:26.776662   55990 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:16:26.776824   55990 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:16:26.790283   55990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:16:26.800927   55990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:16:26.941518   55990 ssh_runner.go:195] Run: sudo systemctl restart crio
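
The two sed invocations above point CRI-O's drop-in config at the kubeadm-compatible pause image and at the cgroupfs cgroup manager before crio is restarted. The Go sketch below performs the equivalent in-place rewrite; the file path and option names come from the log, while permissions and error handling are illustrative.

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf replaces the pause_image and cgroup_manager lines in a CRI-O
// drop-in config, mirroring the sed commands shown in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Same values as in the log; run against a real /etc/crio/crio.conf.d/02-crio.conf.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		panic(err)
	}
}
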
	I0829 19:16:27.038450   55990 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:16:27.038526   55990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:16:27.043692   55990 start.go:563] Will wait 60s for crictl version
	I0829 19:16:27.043760   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:27.047434   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:16:27.090740   55990 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:16:27.090843   55990 ssh_runner.go:195] Run: crio --version
	I0829 19:16:27.120034   55990 ssh_runner.go:195] Run: crio --version
	I0829 19:16:27.149947   55990 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:16:27.151262   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:16:27.154596   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:27.154988   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:16:16 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:16:27.155015   55990 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:16:27.155250   55990 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:16:27.159329   55990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:16:27.171483   55990 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:16:27.171617   55990 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:16:27.171676   55990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:16:27.207438   55990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:16:27.207520   55990 ssh_runner.go:195] Run: which lz4
	I0829 19:16:27.211721   55990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:16:27.215813   55990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:16:27.215845   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:16:28.748469   55990 crio.go:462] duration metric: took 1.536797833s to copy over tarball
	I0829 19:16:28.748560   55990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:16:31.418316   55990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.669710556s)
	I0829 19:16:31.418348   55990 crio.go:469] duration metric: took 2.669844433s to extract the tarball
	I0829 19:16:31.418358   55990 ssh_runner.go:146] rm: /preloaded.tar.lz4
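
The preload step copies the lz4-compressed tarball of cached images (about 473 MB) to the guest and unpacks it under /var with the tar command shown above. A bare-bones Go wrapper around that same invocation is sketched below; the ssh/scp plumbing is omitted and the command is shown as if run locally, purely for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the log's command:
// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
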
	I0829 19:16:31.459135   55990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:16:31.502695   55990 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:16:31.502728   55990 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:16:31.502789   55990 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:16:31.502793   55990 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:31.502828   55990 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:16:31.502849   55990 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:31.502868   55990 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.502869   55990 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:16:31.502915   55990 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:31.502874   55990 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:31.504200   55990 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:16:31.504360   55990 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:31.504374   55990 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:31.504404   55990 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:16:31.504451   55990 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.504362   55990 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:31.504598   55990 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:16:31.504609   55990 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:31.728401   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.766287   55990 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:16:31.766338   55990 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.766387   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:31.770061   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.801722   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.809777   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:31.811479   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:16:31.814791   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:31.828790   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:31.836021   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:16:31.856079   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:16:31.869810   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:31.940226   55990 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:16:31.940270   55990 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:16:31.940323   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:31.940424   55990 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:16:31.940463   55990 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:31.940506   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:31.950394   55990 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:16:31.950439   55990 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:31.950492   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:31.991307   55990 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:16:31.991348   55990 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:16:31.991401   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:31.992392   55990 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:16:31.992426   55990 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:31.992464   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:31.995285   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:16:32.011152   55990 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:16:32.011198   55990 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:32.011206   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:16:32.011237   55990 ssh_runner.go:195] Run: which crictl
	I0829 19:16:32.011305   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:32.011341   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:32.011404   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:16:32.011424   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:32.083718   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:32.083868   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:16:32.122635   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:32.122768   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:32.131610   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:16:32.131610   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:32.160975   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:32.161035   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:16:32.249293   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:16:32.280780   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:16:32.280811   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:16:32.283157   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:16:32.283175   55990 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:16:32.283217   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:16:32.355130   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:16:32.381779   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:16:32.381814   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:16:32.381864   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:16:32.386961   55990 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:16:32.678103   55990 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:16:32.818284   55990 cache_images.go:92] duration metric: took 1.315534171s to LoadCachedImages
	W0829 19:16:32.818354   55990 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
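
Because the preload did not contain the expected v1.20.0 images, each image is checked individually: podman image inspect reports its ID, anything missing or mismatched is removed with crictl rmi, and the image is then loaded from the local cache (which here is also missing kube-apiserver_v1.20.0, hence the warning). The sketch below mirrors that per-image decision with the same commands; the ID comparison and error handling are simplified assumptions, not minikube's exact logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the image or holds a different ID,
// using the same podman invocation that appears in the log.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Image and hash taken from the "needs transfer" log line above.
	image := "registry.k8s.io/kube-apiserver:v1.20.0"
	wantID := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	if needsTransfer(image, wantID) {
		// Remove any stale copy, then load from the on-disk cache path seen in the log.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		fmt.Println("would load", image, "from .minikube/cache/images/amd64/...")
	}
}
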
	I0829 19:16:32.818367   55990 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0829 19:16:32.818500   55990 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-353455 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:16:32.818576   55990 ssh_runner.go:195] Run: crio config
	I0829 19:16:32.870756   55990 cni.go:84] Creating CNI manager for ""
	I0829 19:16:32.870779   55990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:16:32.870793   55990 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:16:32.870816   55990 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-353455 NodeName:kubernetes-upgrade-353455 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:16:32.870972   55990 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-353455"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:16:32.871042   55990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:16:32.880828   55990 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:16:32.880906   55990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:16:32.890655   55990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0829 19:16:32.908548   55990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:16:32.924839   55990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0829 19:16:32.940333   55990 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0829 19:16:32.944087   55990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:16:32.955835   55990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:16:33.085428   55990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:16:33.102547   55990 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455 for IP: 192.168.50.102
	I0829 19:16:33.102571   55990 certs.go:194] generating shared ca certs ...
	I0829 19:16:33.102591   55990 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.102761   55990 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:16:33.102815   55990 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:16:33.102826   55990 certs.go:256] generating profile certs ...
	I0829 19:16:33.102891   55990 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.key
	I0829 19:16:33.102910   55990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.crt with IP's: []
	I0829 19:16:33.233735   55990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.crt ...
	I0829 19:16:33.233765   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.crt: {Name:mkf2f4dc3d90f4ca763e4406390ecfe909009fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.233939   55990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.key ...
	I0829 19:16:33.233952   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.key: {Name:mkf504b02c764c44a59373ddb4847eb1de878be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.234027   55990 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222
	I0829 19:16:33.234042   55990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt.d93ce222 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.102]
	I0829 19:16:33.324042   55990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt.d93ce222 ...
	I0829 19:16:33.324070   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt.d93ce222: {Name:mkf73098924434426d881db9e42ad95183b009af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.324251   55990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222 ...
	I0829 19:16:33.324271   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222: {Name:mk0dc7f907c1b642b51b04b38687e29cac5efbba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.324365   55990 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt.d93ce222 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt
	I0829 19:16:33.324454   55990 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key
	I0829 19:16:33.324522   55990 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key
	I0829 19:16:33.324539   55990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt with IP's: []
	I0829 19:16:33.455765   55990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt ...
	I0829 19:16:33.455794   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt: {Name:mk9e0f82afe8cca597b747d6d1b6a2467e98f8fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.455949   55990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key ...
	I0829 19:16:33.455966   55990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key: {Name:mk07727600a07950ac93d2f5024bd20572e7e20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:16:33.456168   55990 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:16:33.456224   55990 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:16:33.456235   55990 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:16:33.456270   55990 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:16:33.456303   55990 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:16:33.456335   55990 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:16:33.456396   55990 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:16:33.457016   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:16:33.486558   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:16:33.514291   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:16:33.542159   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:16:33.568982   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 19:16:33.595209   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:16:33.619215   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:16:33.643064   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:16:33.666538   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:16:33.690981   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:16:33.717305   55990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:16:33.742642   55990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:16:33.761124   55990 ssh_runner.go:195] Run: openssl version
	I0829 19:16:33.766962   55990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:16:33.777366   55990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:16:33.782105   55990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:16:33.782174   55990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:16:33.787957   55990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:16:33.800403   55990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:16:33.817401   55990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:16:33.822010   55990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:16:33.822068   55990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:16:33.828003   55990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:16:33.838224   55990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:16:33.848144   55990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:16:33.852405   55990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:16:33.852463   55990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:16:33.857620   55990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
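	The hash-named symlink targets above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: 'openssl x509 -hash -noout' prints the subject hash of the certificate, and the system trust store looks up a link named <hash>.0 that points at the PEM. A minimal sketch of the same install step, shown for illustration only (the hash is computed rather than hard-coded):
	# compute the OpenSSL subject hash of a CA and install the hash-named symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"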
	I0829 19:16:33.867468   55990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:16:33.871256   55990 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
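	The failed stat above is the expected path on a fresh node: minikube probes for /var/lib/minikube/certs/apiserver-kubelet-client.crt and interprets a non-zero exit as "cert doesn't exist, likely first start", leaving kubeadm to generate it. A hedged shell equivalent of that probe (illustrative only, not part of the test log):
	# probe for the kubelet client cert; absence means kubeadm init will create it
	if stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
	    echo "apiserver-kubelet-client cert already present"
	else
	    echo "cert missing - treating this as a first start"
	fi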
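	I0829 19:16:33.871304   55990 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}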
	I0829 19:16:33.871397   55990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:16:33.871450   55990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:16:33.907223   55990 cri.go:89] found id: ""
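	The empty 'found id' result above comes from filtering CRI containers by the kube-system namespace label; nothing is running yet, so the list is empty. The same query can be reproduced by hand with crictl (a sketch assuming CRI-O's default socket path):
	# list kube-system container IDs through the CRI, as minikube does before StartCluster
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system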
	I0829 19:16:33.907288   55990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:16:33.920188   55990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:16:33.949186   55990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:16:33.966851   55990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:16:33.966877   55990 kubeadm.go:157] found existing configuration files:
	
	I0829 19:16:33.966928   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:16:33.979826   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:16:33.979896   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:16:33.993864   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:16:34.006980   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:16:34.007057   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:16:34.022159   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:16:34.032133   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:16:34.032207   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:16:34.041731   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:16:34.052511   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:16:34.052582   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
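	The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (in this run every file is simply absent, so each grep exits non-zero and the rm is a no-op). A compact sketch of the same loop, for illustration only:
	# drop kubeconfigs that do not reference the expected control-plane endpoint
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done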
	I0829 19:16:34.063832   55990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:16:34.184740   55990 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:16:34.184827   55990 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:16:34.338049   55990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:16:34.338254   55990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:16:34.338392   55990 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:16:34.524431   55990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:16:34.660769   55990 out.go:235]   - Generating certificates and keys ...
	I0829 19:16:34.660894   55990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:16:34.660992   55990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:16:34.900988   55990 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 19:16:35.067196   55990 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 19:16:35.211763   55990 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 19:16:35.432540   55990 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 19:16:35.491031   55990 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 19:16:35.491187   55990 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-353455 localhost] and IPs [192.168.50.102 127.0.0.1 ::1]
	I0829 19:16:35.599169   55990 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 19:16:35.599568   55990 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-353455 localhost] and IPs [192.168.50.102 127.0.0.1 ::1]
	I0829 19:16:35.881322   55990 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 19:16:36.018872   55990 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 19:16:36.270284   55990 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 19:16:36.270384   55990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:16:36.509648   55990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:16:36.640534   55990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:16:36.983630   55990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:16:37.068614   55990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:16:37.088681   55990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:16:37.090654   55990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:16:37.090741   55990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:16:37.243445   55990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:16:37.244809   55990 out.go:235]   - Booting up control plane ...
	I0829 19:16:37.244943   55990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:16:37.250700   55990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:16:37.259545   55990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:16:37.260676   55990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:16:37.266777   55990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:17:17.260106   55990 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:17:17.260598   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:17:17.260888   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:17:22.261237   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:17:22.261515   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:17:32.260266   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:17:32.260504   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:17:52.259996   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:17:52.260292   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:18:32.261246   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:18:32.261533   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:18:32.261558   55990 kubeadm.go:310] 
	I0829 19:18:32.261624   55990 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:18:32.261697   55990 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:18:32.261709   55990 kubeadm.go:310] 
	I0829 19:18:32.261760   55990 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:18:32.261815   55990 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:18:32.261958   55990 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:18:32.261995   55990 kubeadm.go:310] 
	I0829 19:18:32.262132   55990 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:18:32.262178   55990 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:18:32.262221   55990 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:18:32.262230   55990 kubeadm.go:310] 
	I0829 19:18:32.262362   55990 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:18:32.262506   55990 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:18:32.262516   55990 kubeadm.go:310] 
	I0829 19:18:32.262666   55990 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:18:32.262791   55990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:18:32.262890   55990 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:18:32.262986   55990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:18:32.262996   55990 kubeadm.go:310] 
	I0829 19:18:32.264378   55990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:18:32.264490   55990 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:18:32.264613   55990 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:18:32.264747   55990 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-353455 localhost] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-353455 localhost] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-353455 localhost] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-353455 localhost] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
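	Before the retry below, the triage suggested by the kubeadm output can be run directly on the node; these are the commands from the message above, and they only yield results once the kubelet or runtime has actually created containers (which never happens in this run):
	# kubelet health and journal
	systemctl status kubelet
	journalctl -xeu kubelet
	# control-plane containers under CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID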
	
	I0829 19:18:32.264801   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:18:33.029488   55990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:18:33.043346   55990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:18:33.053003   55990 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:18:33.053028   55990 kubeadm.go:157] found existing configuration files:
	
	I0829 19:18:33.053079   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:18:33.062109   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:18:33.062184   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:18:33.071362   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:18:33.080332   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:18:33.080403   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:18:33.089536   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:18:33.098873   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:18:33.098941   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:18:33.108253   55990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:18:33.117991   55990 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:18:33.118058   55990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:18:33.128006   55990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:18:33.192897   55990 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:18:33.192954   55990 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:18:33.332216   55990 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:18:33.332396   55990 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:18:33.332543   55990 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:18:33.510146   55990 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:18:33.512134   55990 out.go:235]   - Generating certificates and keys ...
	I0829 19:18:33.512233   55990 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:18:33.512318   55990 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:18:33.512416   55990 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:18:33.512526   55990 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:18:33.512624   55990 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:18:33.512701   55990 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:18:33.512787   55990 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:18:33.512883   55990 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:18:33.513431   55990 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:18:33.514018   55990 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:18:33.514163   55990 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:18:33.514243   55990 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:18:33.701810   55990 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:18:33.835895   55990 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:18:33.941140   55990 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:18:34.130025   55990 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:18:34.143739   55990 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:18:34.145863   55990 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:18:34.145953   55990 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:18:34.269152   55990 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:18:34.271201   55990 out.go:235]   - Booting up control plane ...
	I0829 19:18:34.271342   55990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:18:34.276051   55990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:18:34.276924   55990 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:18:34.285021   55990 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:18:34.288499   55990 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:19:14.291825   55990 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:19:14.291930   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:19:14.292161   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:19:19.293349   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:19:19.293637   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:19:29.294591   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:19:29.294884   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:19:49.293978   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:19:49.294303   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:20:29.293629   55990 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:20:29.294356   55990 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:20:29.294381   55990 kubeadm.go:310] 
	I0829 19:20:29.294438   55990 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:20:29.294477   55990 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:20:29.294486   55990 kubeadm.go:310] 
	I0829 19:20:29.294527   55990 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:20:29.294563   55990 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:20:29.294684   55990 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:20:29.294694   55990 kubeadm.go:310] 
	I0829 19:20:29.294806   55990 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:20:29.294850   55990 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:20:29.294890   55990 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:20:29.294901   55990 kubeadm.go:310] 
	I0829 19:20:29.295022   55990 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:20:29.295101   55990 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:20:29.295105   55990 kubeadm.go:310] 
	I0829 19:20:29.295255   55990 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:20:29.295371   55990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:20:29.295467   55990 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:20:29.295550   55990 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:20:29.295558   55990 kubeadm.go:310] 
	I0829 19:20:29.296661   55990 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:20:29.296788   55990 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:20:29.296883   55990 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:20:29.296961   55990 kubeadm.go:394] duration metric: took 3m55.425659641s to StartCluster
	I0829 19:20:29.297002   55990 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:20:29.297058   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:20:29.344631   55990 cri.go:89] found id: ""
	I0829 19:20:29.344660   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.344670   55990 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:20:29.344678   55990 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:20:29.344742   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:20:29.385046   55990 cri.go:89] found id: ""
	I0829 19:20:29.385069   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.385077   55990 logs.go:278] No container was found matching "etcd"
	I0829 19:20:29.385083   55990 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:20:29.385131   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:20:29.424253   55990 cri.go:89] found id: ""
	I0829 19:20:29.424284   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.424295   55990 logs.go:278] No container was found matching "coredns"
	I0829 19:20:29.424303   55990 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:20:29.424362   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:20:29.469207   55990 cri.go:89] found id: ""
	I0829 19:20:29.469234   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.469246   55990 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:20:29.469253   55990 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:20:29.469322   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:20:29.527944   55990 cri.go:89] found id: ""
	I0829 19:20:29.527973   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.527983   55990 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:20:29.527990   55990 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:20:29.528053   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:20:29.561651   55990 cri.go:89] found id: ""
	I0829 19:20:29.561676   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.561686   55990 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:20:29.561693   55990 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:20:29.561745   55990 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:20:29.605159   55990 cri.go:89] found id: ""
	I0829 19:20:29.605191   55990 logs.go:276] 0 containers: []
	W0829 19:20:29.605202   55990 logs.go:278] No container was found matching "kindnet"
	I0829 19:20:29.605213   55990 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:20:29.605234   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:20:29.770500   55990 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:20:29.770546   55990 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:20:29.770562   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:20:29.891301   55990 logs.go:123] Gathering logs for container status ...
	I0829 19:20:29.891348   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:20:29.940499   55990 logs.go:123] Gathering logs for kubelet ...
	I0829 19:20:29.940529   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:20:30.014259   55990 logs.go:123] Gathering logs for dmesg ...
	I0829 19:20:30.014298   55990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
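	The log-gathering pass above can be replayed manually when diagnosing the same failure; the commands below are the ones minikube itself ran (the v1.20.0 binary path and kubeconfig location are specific to this run):
	# diagnostics minikube collects after a failed StartCluster
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo crictl ps -a
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig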
	W0829 19:20:30.034210   55990 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:20:30.034263   55990 out.go:270] * 
	* 
	W0829 19:20:30.034328   55990 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:20:30.034346   55990 out.go:270] * 
	* 
	W0829 19:20:30.035368   55990 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:20:30.038881   55990 out.go:201] 
	W0829 19:20:30.040102   55990 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:20:30.040202   55990 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:20:30.040238   55990 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:20:30.042466   55990 out.go:201] 

                                                
                                                
** /stderr **
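The kubelet never became healthy, so kubeadm's wait-control-plane phase timed out. For reference, the troubleshooting steps the output above suggests can be collected into a single pass from a shell on the node (for example via `minikube ssh -p kubernetes-upgrade-353455`); this is only a sketch of those commands, with `sudo`, `--no-pager`, and the flags on the final retry added as assumptions for this test environment:

    # kubelet health and recent journal entries (per the kubeadm advice above)
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100

    # Kubernetes containers known to CRI-O, then the logs of a failing one
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

    # (run on the host, not inside the VM) retry the start with the cgroup
    # driver that the "Suggestion" line in the log above proposes
    out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
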
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-353455
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-353455: (6.619831044s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-353455 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-353455 status --format={{.Host}}: exit status 7 (65.411831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.77183306s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-353455 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.080836ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-353455] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-353455
	    minikube start -p kubernetes-upgrade-353455 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3534552 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-353455 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
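The downgrade guard exits with K8S_DOWNGRADE_UNSUPPORTED and offers three ways forward; the test below takes the third, restarting the existing cluster at v1.31.0. For an actual downgrade, option 1 from the suggestion above is the route; as a sketch (the --driver and --container-runtime flags are assumptions carried over from this test run, and the final version check mirrors the one the test issues):

    minikube delete -p kubernetes-upgrade-353455
    minikube start -p kubernetes-upgrade-353455 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio
    kubectl --context kubernetes-upgrade-353455 version --output=json
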
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-353455 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.169226454s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-29 19:22:46.902402891 +0000 UTC m=+4639.623195677
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-353455 -n kubernetes-upgrade-353455
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-353455 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-353455 logs -n 25: (1.602519451s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-633326 sudo                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo cat              | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo cat              | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo find             | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo crio             | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-633326                       | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC | 29 Aug 24 19:19 UTC |
	| start   | -p cert-options-034564                 | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC | 29 Aug 24 19:21 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-353455           | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	| start   | -p kubernetes-upgrade-353455           | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:21 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-523972 ssh cat      | force-systemd-flag-523972 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-523972           | force-systemd-flag-523972 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	| start   | -p pause-518621 --memory=2048          | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:22 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-034564 ssh                | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-034564 -- sudo         | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-034564                 | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	| start   | -p auto-633326 --memory=3072           | auto-633326               | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-353455           | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-353455           | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:22 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-518621                        | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p pause-518621                        | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:22:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:22:06.543047   64307 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:22:06.543136   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543143   64307 out.go:358] Setting ErrFile to fd 2...
	I0829 19:22:06.543147   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543333   64307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:22:06.543919   64307 out.go:352] Setting JSON to false
	I0829 19:22:06.544855   64307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7474,"bootTime":1724951853,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:22:06.544918   64307 start.go:139] virtualization: kvm guest
	I0829 19:22:06.547020   64307 out.go:177] * [pause-518621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:22:06.548267   64307 notify.go:220] Checking for updates...
	I0829 19:22:06.548290   64307 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:22:06.549457   64307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:22:06.550545   64307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:22:06.551572   64307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:22:06.552629   64307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:22:06.553879   64307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:22:06.555449   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:06.556008   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.556072   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.571521   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0829 19:22:06.572004   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.572569   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.572593   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.572979   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.573186   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.573422   64307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:22:06.573774   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.573811   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.588552   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0829 19:22:06.589111   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.589660   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.589684   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.590034   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.590283   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.626910   64307 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:22:06.628114   64307 start.go:297] selected driver: kvm2
	I0829 19:22:06.628134   64307 start.go:901] validating driver "kvm2" against &{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.628330   64307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:22:06.628800   64307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.628902   64307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:22:06.644356   64307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:22:06.645155   64307 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.645172   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.645238   64307 start.go:340] cluster config:
	{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.645392   64307 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.647209   64307 out.go:177] * Starting "pause-518621" primary control-plane node in "pause-518621" cluster
	I0829 19:22:06.648577   64307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:06.648622   64307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:22:06.648630   64307 cache.go:56] Caching tarball of preloaded images
	I0829 19:22:06.648726   64307 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:22:06.648739   64307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:22:06.648910   64307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/config.json ...
	I0829 19:22:06.649147   64307 start.go:360] acquireMachinesLock for pause-518621: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:22:08.347255   64307 start.go:364] duration metric: took 1.698077985s to acquireMachinesLock for "pause-518621"
	I0829 19:22:08.347323   64307 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:22:08.347332   64307 fix.go:54] fixHost starting: 
	I0829 19:22:08.347776   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:08.347825   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:08.368493   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0829 19:22:08.368962   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:08.369484   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:08.369509   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:08.369874   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:08.370063   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.370236   64307 main.go:141] libmachine: (pause-518621) Calling .GetState
	I0829 19:22:08.371946   64307 fix.go:112] recreateIfNeeded on pause-518621: state=Running err=<nil>
	W0829 19:22:08.371976   64307 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:22:08.550441   64307 out.go:177] * Updating the running kvm2 "pause-518621" VM ...
	I0829 19:22:08.114559   63960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:22:08.114586   63960 machine.go:96] duration metric: took 6.693314723s to provisionDockerMachine
	I0829 19:22:08.114598   63960 start.go:293] postStartSetup for "kubernetes-upgrade-353455" (driver="kvm2")
	I0829 19:22:08.114607   63960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:22:08.114626   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.115022   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:22:08.115049   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.118095   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.118498   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.118529   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.118720   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.118905   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.119118   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.119320   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.200131   63960 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:22:08.203930   63960 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:22:08.203953   63960 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:22:08.204015   63960 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:22:08.204112   63960 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:22:08.204234   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:22:08.213344   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:08.239778   63960 start.go:296] duration metric: took 125.16719ms for postStartSetup
	I0829 19:22:08.239819   63960 fix.go:56] duration metric: took 6.844921079s for fixHost
	I0829 19:22:08.239848   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.243125   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.243470   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.243500   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.243637   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.243812   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.244002   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.244175   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.244350   63960 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.244514   63960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:22:08.244530   63960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:22:08.347128   63960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959328.335613934
	
	I0829 19:22:08.347147   63960 fix.go:216] guest clock: 1724959328.335613934
	I0829 19:22:08.347154   63960 fix.go:229] Guest: 2024-08-29 19:22:08.335613934 +0000 UTC Remote: 2024-08-29 19:22:08.239823526 +0000 UTC m=+34.502528738 (delta=95.790408ms)
	I0829 19:22:08.347171   63960 fix.go:200] guest clock delta is within tolerance: 95.790408ms
	I0829 19:22:08.347176   63960 start.go:83] releasing machines lock for "kubernetes-upgrade-353455", held for 6.952310233s
	I0829 19:22:08.347198   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.347465   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:22:08.350559   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.350972   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.351001   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.351129   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351658   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351847   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351951   63960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:22:08.352005   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.352074   63960 ssh_runner.go:195] Run: cat /version.json
	I0829 19:22:08.352094   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.354669   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355065   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.355102   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355145   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355405   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.355603   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.355622   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.355637   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355759   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.355884   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.355923   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.356454   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.356634   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.356766   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.474033   63960 ssh_runner.go:195] Run: systemctl --version
	I0829 19:22:08.480891   63960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:22:08.646744   63960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:22:08.652962   63960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:22:08.653033   63960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:22:08.662404   63960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
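
The find/mv command above moves any bridge or podman CNI config out of the way by appending a .mk_disabled suffix. A local-filesystem sketch of the same idea in Go (the real flow runs the shell command on the guest over SSH; disableBridgeCNIConfigs is an illustrative name, not minikube code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs in dir so they
// cannot conflict with the CNI that is about to be installed.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Printf("disabled %d CNI config(s): %v\n", len(moved), moved)
}
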
	I0829 19:22:08.662428   63960 start.go:495] detecting cgroup driver to use...
	I0829 19:22:08.662501   63960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:22:08.679704   63960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:22:08.693171   63960 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:22:08.693246   63960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:22:08.707627   63960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:22:08.722664   63960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:22:04.162844   63595 crio.go:462] duration metric: took 1.242688236s to copy over tarball
	I0829 19:22:04.162951   63595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:22:06.319132   63595 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.156145348s)
	I0829 19:22:06.319163   63595 crio.go:469] duration metric: took 2.15628063s to extract the tarball
	I0829 19:22:06.319170   63595 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:22:06.358038   63595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:06.404153   63595 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:06.404174   63595 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:06.404184   63595 kubeadm.go:934] updating node { 192.168.72.204 8443 v1.31.0 crio true true} ...
	I0829 19:22:06.404300   63595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-633326 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-633326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:22:06.404380   63595 ssh_runner.go:195] Run: crio config
	I0829 19:22:06.452166   63595 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.452189   63595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.452206   63595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:06.452234   63595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.204 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-633326 NodeName:auto-633326 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:06.452430   63595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-633326"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:22:06.452501   63595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:06.462366   63595 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:06.462445   63595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:06.471649   63595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0829 19:22:06.489823   63595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:06.506237   63595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0829 19:22:06.524914   63595 ssh_runner.go:195] Run: grep 192.168.72.204	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:06.529063   63595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
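
The bash pipeline above rebuilds /etc/hosts so exactly one line maps control-plane.minikube.internal to the node IP. A minimal Go sketch of the same transformation, assuming direct local-file access rather than the grep/echo/cp sequence run over SSH (ensureHostsEntry and the hosts.test path are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line that maps to name, then appends
// "ip\tname", mirroring the shell pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, re-added below
		}
		kept = append(kept, line)
	}
	// Trim trailing empty elements so blank lines do not accumulate.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+name, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
}

func main() {
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0o644)
	if err := ensureHostsEntry("hosts.test", "192.168.72.204", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
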
	I0829 19:22:06.542543   63595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:06.665775   63595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:22:06.682477   63595 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326 for IP: 192.168.72.204
	I0829 19:22:06.682502   63595 certs.go:194] generating shared ca certs ...
	I0829 19:22:06.682522   63595 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.682692   63595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:06.682746   63595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:06.682760   63595 certs.go:256] generating profile certs ...
	I0829 19:22:06.682822   63595 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key
	I0829 19:22:06.682841   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt with IP's: []
	I0829 19:22:06.886677   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt ...
	I0829 19:22:06.886705   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: {Name:mk41f64f3a6ddca4ed8bd76984b3aabccc2281b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.886860   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key ...
	I0829 19:22:06.886870   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key: {Name:mke01efa75415e3f69863e323c0bb09f3a6c88b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.886944   63595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6
	I0829 19:22:06.886958   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.204]
	I0829 19:22:06.975367   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 ...
	I0829 19:22:06.975395   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6: {Name:mk0921353250c97cd41cc56849feb45129d92a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.975545   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6 ...
	I0829 19:22:06.975557   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6: {Name:mk7ec497edc365eec664d690e74cb1682a30c355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.975637   63595 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt
	I0829 19:22:06.975720   63595 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key
	I0829 19:22:06.975776   63595 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key
	I0829 19:22:06.975789   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt with IP's: []
	I0829 19:22:07.066728   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt ...
	I0829 19:22:07.066766   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt: {Name:mkd471572d263df053b52e4ac3de60fd35c451b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:07.066961   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key ...
	I0829 19:22:07.066983   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key: {Name:mke117647a722fca5d6b277e25571334a48c88ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
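
The profile certs generated above (client, apiserver, aggregator proxy-client) are leaf certificates signed by the shared minikubeCA, with the apiserver cert carrying IP SANs for the service VIP, loopback, and node address. A self-contained x509 sketch of issuing such a cert in Go, using a throwaway CA and eliding most error handling for brevity (this is not minikube's crypto.go, only an illustration of the mechanics):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs shown in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.204"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
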
	I0829 19:22:07.067156   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:07.067189   63595 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:07.067198   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:07.067219   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:07.067241   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:07.067262   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:07.067297   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:07.067919   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:07.092100   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:07.115710   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:07.139686   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:07.161866   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0829 19:22:07.184987   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:22:07.209673   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:07.232488   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:22:07.256030   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:07.278640   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:07.302359   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:07.327477   63595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:07.346983   63595 ssh_runner.go:195] Run: openssl version
	I0829 19:22:07.352825   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:07.371752   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.381442   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.381513   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.390314   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:22:07.405041   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:07.415262   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.419452   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.419504   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.425343   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:07.436289   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:07.451616   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.456249   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.456318   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.462273   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
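
Each CA cert copied above is made discoverable by OpenSSL-style clients through a <subject-hash>.0 symlink in /etc/ssl/certs, which is what the openssl/ln pairs in the log establish. A local sketch of that step in Go, shelling out to openssl for the hash (installCACert is an illustrative helper; the real flow runs the same commands on the guest via sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a PEM and creates the
// <hash>.0 symlink that certificate lookups use.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs semantics: replace any existing link, then point it at the PEM.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}
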
	I0829 19:22:07.476300   63595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:07.480866   63595 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:22:07.480924   63595 kubeadm.go:392] StartCluster: {Name:auto-633326 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clu
sterName:auto-633326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.204 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:07.481016   63595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:07.481072   63595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:07.524915   63595 cri.go:89] found id: ""
	I0829 19:22:07.524996   63595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:22:07.535128   63595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:22:07.545781   63595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:22:07.555227   63595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:22:07.555247   63595 kubeadm.go:157] found existing configuration files:
	
	I0829 19:22:07.555296   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:22:07.564299   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:22:07.564371   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:22:07.576585   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:22:07.587693   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:22:07.587752   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:22:07.597305   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:22:07.607228   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:22:07.607276   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:22:07.618149   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:22:07.626961   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:22:07.627021   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
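
The grep/rm sequence above checks each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any that do not reference it, so kubeadm starts from a clean slate. A local-file sketch of that loop in Go (cleanStaleKubeconfigs is an illustrative name; the real flow runs grep and rm on the guest via sudo):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleKubeconfigs removes any kubeconfig that no longer points at the
// expected control-plane endpoint.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if bytes.Contains(data, []byte(endpoint)) {
			continue // still points at the right endpoint, keep it
		}
		if err := os.Remove(p); err == nil {
			fmt.Println("removed stale config:", p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
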
	I0829 19:22:07.636514   63595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:22:07.690607   63595 kubeadm.go:310] W0829 19:22:07.674107     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:22:07.691595   63595 kubeadm.go:310] W0829 19:22:07.675323     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:22:07.803110   63595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:22:08.881843   63960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:22:09.078449   63960 docker.go:233] disabling docker service ...
	I0829 19:22:09.078533   63960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:22:09.097735   63960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:22:09.114065   63960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:22:09.264241   63960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:22:09.414749   63960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:22:09.430676   63960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:22:09.451678   63960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:22:09.451745   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.462248   63960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:22:09.462329   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.475080   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.486878   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.509052   63960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:22:09.520518   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.533169   63960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.547813   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.560086   63960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:22:09.571848   63960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:22:09.581914   63960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:09.744131   63960 ssh_runner.go:195] Run: sudo systemctl restart crio
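
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), enables IPv4 forwarding, and restarts crio. A dry-run sketch in Go that lists the same commands against a generic runner (run stands in for minikube's ssh_runner; the command list is taken from the log above):

package main

import "fmt"

// configureCRIO applies the cri-o edits shown in the log and restarts the
// service. run is a stand-in for an SSH command runner.
func configureCRIO(run func(cmd string) error, pauseImage string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}

func main() {
	// Dry run: print each command instead of executing it on a guest.
	_ = configureCRIO(func(cmd string) error { fmt.Println(cmd); return nil }, "registry.k8s.io/pause:3.10")
}
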
	I0829 19:22:10.620391   63960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:22:10.620469   63960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:22:10.625940   63960 start.go:563] Will wait 60s for crictl version
	I0829 19:22:10.626010   63960 ssh_runner.go:195] Run: which crictl
	I0829 19:22:10.629569   63960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:22:10.676127   63960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:22:10.676218   63960 ssh_runner.go:195] Run: crio --version
	I0829 19:22:10.713956   63960 ssh_runner.go:195] Run: crio --version
	I0829 19:22:10.747555   63960 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:22:08.722026   64307 machine.go:93] provisionDockerMachine start ...
	I0829 19:22:08.722069   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.722439   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.726052   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726492   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.726524   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726729   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.726937   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727124   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727292   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.727554   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.727786   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.727802   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:22:08.843036   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.843068   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843321   64307 buildroot.go:166] provisioning hostname "pause-518621"
	I0829 19:22:08.843350   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.846965   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847413   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.847437   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847621   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.847834   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.847964   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.848145   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.848330   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.848533   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.848548   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-518621 && echo "pause-518621" | sudo tee /etc/hostname
	I0829 19:22:08.976480   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.976511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.979685   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980082   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.980117   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980399   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.980639   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980819   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980959   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.981169   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.981413   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.981469   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-518621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-518621/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-518621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:22:09.083539   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:22:09.083574   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:22:09.083617   64307 buildroot.go:174] setting up certificates
	I0829 19:22:09.083631   64307 provision.go:84] configureAuth start
	I0829 19:22:09.083641   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:09.083917   64307 main.go:141] libmachine: (pause-518621) Calling .GetIP
	I0829 19:22:09.086993   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087524   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.087577   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087752   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.090258   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090527   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.090555   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090675   64307 provision.go:143] copyHostCerts
	I0829 19:22:09.090733   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:22:09.090746   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:22:09.162320   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:22:09.162489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:22:09.162502   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:22:09.162543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:22:09.162620   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:22:09.162629   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:22:09.162660   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:22:09.162723   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.pause-518621 san=[127.0.0.1 192.168.61.203 localhost minikube pause-518621]
	I0829 19:22:09.520291   64307 provision.go:177] copyRemoteCerts
	I0829 19:22:09.520373   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:22:09.520413   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.523620   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.523990   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.524022   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.524271   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.524511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.524733   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.524894   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:09.611312   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:22:09.639602   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:22:09.669692   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0829 19:22:09.705901   64307 provision.go:87] duration metric: took 622.256236ms to configureAuth
	I0829 19:22:09.705938   64307 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:22:09.706215   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:09.706332   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.709310   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709726   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.709758   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709943   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.710159   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710330   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.710714   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:09.710910   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:09.710932   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:22:10.748883   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:22:10.751623   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:10.752057   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:10.752087   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:10.752309   63960 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:22:10.756938   63960 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:22:10.757043   63960 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:10.757102   63960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:10.797885   63960 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:10.797914   63960 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:22:10.797972   63960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:10.833343   63960 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:10.833366   63960 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:10.833375   63960 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.31.0 crio true true} ...
	I0829 19:22:10.833500   63960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-353455 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:22:10.833584   63960 ssh_runner.go:195] Run: crio config
	I0829 19:22:11.082681   63960 cni.go:84] Creating CNI manager for ""
	I0829 19:22:11.082717   63960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:11.082738   63960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:11.082778   63960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-353455 NodeName:kubernetes-upgrade-353455 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:11.082981   63960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-353455"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:22:11.083081   63960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:11.181053   63960 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:11.181145   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:11.227656   63960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0829 19:22:11.352976   63960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:11.486618   63960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
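(Note: the multi-document YAML dumped at kubeadm.go:187 above — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration — is what gets written to /var/tmp/minikube/kubeadm.yaml.new on this line. A minimal Go sketch, not minikube's own code, for splitting such a multi-document file and printing each document's apiVersion/kind; it assumes gopkg.in/yaml.v3 and an illustrative local path kubeadm.yaml:)

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// docHeader captures only the identifying fields of each YAML document.
type docHeader struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Illustrative local copy of the generated kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode one YAML document at a time until EOF.
	dec := yaml.NewDecoder(f)
	for {
		var h docHeader
		err := dec.Decode(&h)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
	}
}

(Run against the config above, this would print one line per embedded document, e.g. "kubeadm.k8s.io/v1beta3 / InitConfiguration".)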
	I0829 19:22:11.589619   63960 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:11.609228   63960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:11.948411   63960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:22:11.985258   63960 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455 for IP: 192.168.50.102
	I0829 19:22:11.985287   63960 certs.go:194] generating shared ca certs ...
	I0829 19:22:11.985309   63960 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:11.985534   63960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:11.985616   63960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:11.985633   63960 certs.go:256] generating profile certs ...
	I0829 19:22:11.985768   63960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.key
	I0829 19:22:11.985846   63960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222
	I0829 19:22:11.985899   63960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key
	I0829 19:22:11.986046   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:11.986117   63960 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:11.986131   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:11.986167   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:11.986214   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:11.986243   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:11.986311   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:11.991503   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:12.046976   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:12.162255   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:12.211953   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:12.244188   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 19:22:12.272134   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:22:12.302541   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:12.334884   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:22:12.394205   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:12.445842   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:12.499700   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:12.587552   63960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:12.654436   63960 ssh_runner.go:195] Run: openssl version
	I0829 19:22:12.670724   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:12.685960   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.690505   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.690567   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.698450   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:12.710185   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:12.723972   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.730197   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.730259   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.737837   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:22:12.748838   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:12.761782   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.766511   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.766573   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.772918   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:22:12.784575   63960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:12.788891   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:22:12.794996   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:22:12.800752   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:22:12.806803   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:22:12.812096   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:22:12.817499   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
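(Note: the six `openssl x509 ... -checkend 86400` runs above verify that each existing control-plane certificate stays valid for at least another 86400 seconds, i.e. 24 hours, before minikube reuses it instead of regenerating. A hedged Go equivalent using crypto/x509; the certificate path is illustrative and mirrors one of the files checked above:)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Mirrors `openssl x509 -noout -in <cert> -checkend 86400`:
	// does the certificate remain valid 24 hours from now?
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.After(deadline) {
		fmt.Println("certificate will not expire within 24h; reuse it")
	} else {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}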
	I0829 19:22:12.823087   63960 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:12.823187   63960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:12.823257   63960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:12.896542   63960 cri.go:89] found id: "c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2"
	I0829 19:22:12.896570   63960 cri.go:89] found id: "c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6"
	I0829 19:22:12.896577   63960 cri.go:89] found id: "1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98"
	I0829 19:22:12.896582   63960 cri.go:89] found id: "d390a833f84a3199f7a1e4020b262916b76f50a210ff2ee2a9ab18fd2786fc5d"
	I0829 19:22:12.896604   63960 cri.go:89] found id: "212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2"
	I0829 19:22:12.896609   63960 cri.go:89] found id: "cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0"
	I0829 19:22:12.896615   63960 cri.go:89] found id: "008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd"
	I0829 19:22:12.896619   63960 cri.go:89] found id: "0cd01fd8b57cf8f4e4b611390b809d76c0d79dfe675a582f411a5b6853b0ac5c"
	I0829 19:22:12.896623   63960 cri.go:89] found id: "b089c64d036f0349d5af067696bc01f28fb421669b56528167c94d2f0fc02808"
	I0829 19:22:12.896632   63960 cri.go:89] found id: "36b3fb146d05a158f24dab08aa4d54f194eeeaa0402b864428388d48c52e1073"
	I0829 19:22:12.896640   63960 cri.go:89] found id: "285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7"
	I0829 19:22:12.896644   63960 cri.go:89] found id: "f537dc7a1b4a4b62a16b3dad35ee2633093b730e986c2461d312b3c7cc39dc90"
	I0829 19:22:12.896651   63960 cri.go:89] found id: "e05767faee629c2756c35722878496839934351dda4ee2bd3838c2986c7fcf3e"
	I0829 19:22:12.896655   63960 cri.go:89] found id: "9b563857dc4d3fa049193ff55c4f1810290a0b471d1a76434f996f7cbbf2df86"
	I0829 19:22:12.896663   63960 cri.go:89] found id: "214cbd72e3eb481ba9580536acabdc6d3bb6bf3a248a6cac0ad64a5149a1b4eb"
	I0829 19:22:12.896670   63960 cri.go:89] found id: ""
	I0829 19:22:12.896756   63960 ssh_runner.go:195] Run: sudo runc list -f json
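(Note: cri.go:54/89 above show StartCluster enumerating every kube-system container, running or exited, before deciding how to restart the control plane. A rough Go sketch of the same crictl invocation; it assumes crictl and sudo are available locally rather than going through minikube's ssh_runner:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as the log line above: all containers in any state,
	// labelled with the kube-system namespace, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}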
	
	
	==> CRI-O <==
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.213333342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1036f68b-9541-4352-9c57-24ebe8ed1c29 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.214887214Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea9f4343-5089-4f18-ad28-d81d5896ff4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.215536866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959368215505834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea9f4343-5089-4f18-ad28-d81d5896ff4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.216276745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4643e2f-a888-4bf6-bf72-86e4f85ee63f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.216368813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4643e2f-a888-4bf6-bf72-86e4f85ee63f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.217250970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41cb33bf1862d18d2e61300ab7a03eed124f9e097b728b4dee9f3e0e07853bfa,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364414449937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678563060dd8f85c5ee782b40fc67a20ff5a1e445b978e0d5ee0dd69816ccf6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364401211083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f133f6463690dd15bba7da278412436fa872f343c5f722038dddaecdac15e72b,PodSandboxId:e407d6bfda9cb1f991b74fc75be6d1a0ab6384704d961d541121da55c6225591,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1724959364385715113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea28c0f8-becd-4289-be1d-1a2ee7c649f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8a0223d040e31af5d2796409d8ef34a095691c2aa9b2b0ce5f71c41b22be9b,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNIN
G,CreatedAt:1724959360594608755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56faebaaa3d5a20d789c1ecedc89011c3f614ada0ccf0ae1c4a68f9cd223a687,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6ddf2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cre
atedAt:1724959360603480062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31107f20ba69a54e31e867ef89cbe5f24d4f61974b851674c736da276953753,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING
,CreatedAt:1724959360574678251,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6ab33b0957833fc60a863d02363c414c90412d3278071931f60c131d281816,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5bdee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNN
ING,CreatedAt:1724959360568718167,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e60e124253cbde693be91907407dcaf0037eae2683ada1014cf32c77523a00,PodSandboxId:9ef556f9f6addd8d415ae8494b513c40fe38ceef6bcc01a247ceed6272bb3c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495933152822
7001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332526877093,Labels:map[string]string{io.kuber
netes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332384132995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6d
df2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959331560033506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890
ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959331403180391,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335
de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959331281308081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5b
dee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959331155370390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7,PodSandboxId:b4d147175f84abcb63ac9ee449e075c811765d428ba98b4b8dc8b36344911135,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959292806521535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4643e2f-a888-4bf6-bf72-86e4f85ee63f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.259631520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7403d457-d689-43c5-8a99-38c6e692ba7a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.259715611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7403d457-d689-43c5-8a99-38c6e692ba7a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.260785095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c297f5cf-f23a-40a0-8a43-158c29485db4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.261197355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959368261173063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c297f5cf-f23a-40a0-8a43-158c29485db4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.261675684Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7f34a3d2-377e-49bc-8371-f2fda7fc7dd9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.262071678Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-vh8gg,Uid:dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959331379327320,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:32.181049937Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-slhgn,Uid:094d8fa9-7273-4418-8c0b-8c0e626e84c9,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959331282178136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:32.212193341Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:969ba7529116cf64276011f2f23003fd7101680654e01d6ddf2e6be3571e714b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-353455,Uid:36ed7aa9b84562bf2b23cef2f8d68c11,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959331077244912,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,tier: control-plane,},Ann
otations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.102:8443,kubernetes.io/config.hash: 36ed7aa9b84562bf2b23cef2f8d68c11,kubernetes.io/config.seen: 2024-08-29T19:21:17.591346078Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890ad8d1962c4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-353455,Uid:13ab2db04965b7fbdf470dfbea33600c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959331027817398,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 13ab2db04965b7fbdf470dfbea33600c,kubernetes.io/config.seen: 2024-08-29T19:21:17.593697187Z,kubernetes.io/config
.source: file,},RuntimeHandler:,},&PodSandbox{Id:e407d6bfda9cb1f991b74fc75be6d1a0ab6384704d961d541121da55c6225591,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea28c0f8-becd-4289-be1d-1a2ee7c649f3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959330961842931,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea28c0f8-becd-4289-be1d-1a2ee7c649f3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v
5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T19:21:33.520243383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ef556f9f6addd8d415ae8494b513c40fe38ceef6bcc01a247ceed6272bb3c11,Metadata:&PodSandboxMetadata{Name:kube-proxy-x2rvn,Uid:67c002b2-c432-4972-a8d6-efe417f15e76,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959330895759421,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:2
1:32.285660283Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa3caec20b349a9a46a785e085ab3b335de0b473ec320006d7eb731c38acbf65,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-353455,Uid:97bf050bc1d8c52effe065b7bf80e5f0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724959330889513035,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97bf050bc1d8c52effe065b7bf80e5f0,kubernetes.io/config.seen: 2024-08-29T19:21:17.601284831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d316d6ebcab81373423414bed94841c87399f5bdee4900240eb8fdcd3819ec2d,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-353455,Uid:c0f9ccac77344020e79fc28d16b60036,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1724959330878480675,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.102:2379,kubernetes.io/config.hash: c0f9ccac77344020e79fc28d16b60036,kubernetes.io/config.seen: 2024-08-29T19:21:17.662867957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b4d147175f84abcb63ac9ee449e075c811765d428ba98b4b8dc8b36344911135,Metadata:&PodSandboxMetadata{Name:kube-proxy-x2rvn,Uid:67c002b2-c432-4972-a8d6-efe417f15e76,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724959292596283546,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:32.285660283Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7f34a3d2-377e-49bc-8371-f2fda7fc7dd9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.262586797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85553f17-7d24-4127-bf74-84902a027afa name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.262636744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85553f17-7d24-4127-bf74-84902a027afa name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.263088915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41cb33bf1862d18d2e61300ab7a03eed124f9e097b728b4dee9f3e0e07853bfa,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364414449937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678563060dd8f85c5ee782b40fc67a20ff5a1e445b978e0d5ee0dd69816ccf6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364401211083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f133f6463690dd15bba7da278412436fa872f343c5f722038dddaecdac15e72b,PodSandboxId:e407d6bfda9cb1f991b74fc75be6d1a0ab6384704d961d541121da55c6225591,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1724959364385715113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea28c0f8-becd-4289-be1d-1a2ee7c649f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8a0223d040e31af5d2796409d8ef34a095691c2aa9b2b0ce5f71c41b22be9b,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNIN
G,CreatedAt:1724959360594608755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56faebaaa3d5a20d789c1ecedc89011c3f614ada0ccf0ae1c4a68f9cd223a687,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6ddf2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cre
atedAt:1724959360603480062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31107f20ba69a54e31e867ef89cbe5f24d4f61974b851674c736da276953753,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING
,CreatedAt:1724959360574678251,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6ab33b0957833fc60a863d02363c414c90412d3278071931f60c131d281816,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5bdee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNN
ING,CreatedAt:1724959360568718167,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e60e124253cbde693be91907407dcaf0037eae2683ada1014cf32c77523a00,PodSandboxId:9ef556f9f6addd8d415ae8494b513c40fe38ceef6bcc01a247ceed6272bb3c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495933152822
7001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332526877093,Labels:map[string]string{io.kuber
netes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332384132995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6d
df2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959331560033506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890
ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959331403180391,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335
de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959331281308081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5b
dee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959331155370390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7,PodSandboxId:b4d147175f84abcb63ac9ee449e075c811765d428ba98b4b8dc8b36344911135,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959292806521535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85553f17-7d24-4127-bf74-84902a027afa name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.263152133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d7db118-e0f8-4703-b356-22aa6668f144 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.263481881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d7db118-e0f8-4703-b356-22aa6668f144 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.264364531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41cb33bf1862d18d2e61300ab7a03eed124f9e097b728b4dee9f3e0e07853bfa,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364414449937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678563060dd8f85c5ee782b40fc67a20ff5a1e445b978e0d5ee0dd69816ccf6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364401211083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f133f6463690dd15bba7da278412436fa872f343c5f722038dddaecdac15e72b,PodSandboxId:e407d6bfda9cb1f991b74fc75be6d1a0ab6384704d961d541121da55c6225591,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1724959364385715113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea28c0f8-becd-4289-be1d-1a2ee7c649f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8a0223d040e31af5d2796409d8ef34a095691c2aa9b2b0ce5f71c41b22be9b,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNIN
G,CreatedAt:1724959360594608755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56faebaaa3d5a20d789c1ecedc89011c3f614ada0ccf0ae1c4a68f9cd223a687,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6ddf2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cre
atedAt:1724959360603480062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31107f20ba69a54e31e867ef89cbe5f24d4f61974b851674c736da276953753,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING
,CreatedAt:1724959360574678251,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6ab33b0957833fc60a863d02363c414c90412d3278071931f60c131d281816,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5bdee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNN
ING,CreatedAt:1724959360568718167,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e60e124253cbde693be91907407dcaf0037eae2683ada1014cf32c77523a00,PodSandboxId:9ef556f9f6addd8d415ae8494b513c40fe38ceef6bcc01a247ceed6272bb3c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495933152822
7001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332526877093,Labels:map[string]string{io.kuber
netes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332384132995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6d
df2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959331560033506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890
ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959331403180391,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335
de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959331281308081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5b
dee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959331155370390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7,PodSandboxId:b4d147175f84abcb63ac9ee449e075c811765d428ba98b4b8dc8b36344911135,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959292806521535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d7db118-e0f8-4703-b356-22aa6668f144 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.300840144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c4b7218-639a-47db-9858-1cf48beb53c9 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.301064963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c4b7218-639a-47db-9858-1cf48beb53c9 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.302650302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4bc755b-a953-4ccf-adb8-191b08dd6d57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.303270547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959368303243830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4bc755b-a953-4ccf-adb8-191b08dd6d57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.303926491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbe750dd-31a8-4864-a5c3-d1b97749f99b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.304004506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbe750dd-31a8-4864-a5c3-d1b97749f99b name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:48 kubernetes-upgrade-353455 crio[2245]: time="2024-08-29 19:22:48.304343860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41cb33bf1862d18d2e61300ab7a03eed124f9e097b728b4dee9f3e0e07853bfa,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364414449937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678563060dd8f85c5ee782b40fc67a20ff5a1e445b978e0d5ee0dd69816ccf6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959364401211083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f133f6463690dd15bba7da278412436fa872f343c5f722038dddaecdac15e72b,PodSandboxId:e407d6bfda9cb1f991b74fc75be6d1a0ab6384704d961d541121da55c6225591,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_EXITED,CreatedAt:1724959364385715113,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea28c0f8-becd-4289-be1d-1a2ee7c649f3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae8a0223d040e31af5d2796409d8ef34a095691c2aa9b2b0ce5f71c41b22be9b,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNIN
G,CreatedAt:1724959360594608755,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56faebaaa3d5a20d789c1ecedc89011c3f614ada0ccf0ae1c4a68f9cd223a687,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6ddf2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,Cre
atedAt:1724959360603480062,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31107f20ba69a54e31e867ef89cbe5f24d4f61974b851674c736da276953753,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING
,CreatedAt:1724959360574678251,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6ab33b0957833fc60a863d02363c414c90412d3278071931f60c131d281816,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5bdee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNN
ING,CreatedAt:1724959360568718167,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e60e124253cbde693be91907407dcaf0037eae2683ada1014cf32c77523a00,PodSandboxId:9ef556f9f6addd8d415ae8494b513c40fe38ceef6bcc01a247ceed6272bb3c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172495933152822
7001,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2,PodSandboxId:88ce4d3a6dde8fa615685ecf1d8e745200e10f3fbdcd6b882027dfe82bcf1bd6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332526877093,Labels:map[string]string{io.kuber
netes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-slhgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 094d8fa9-7273-4418-8c0b-8c0e626e84c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6,PodSandboxId:0f5a7d5dfdc0c174a8683cc40b7cfd2f714e9e4c995088c6f55929c47bd1c01a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959332384132995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vh8gg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcbff43a-b4b7-4fce-8898-630b8ad5ab3e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98,PodSandboxId:969ba7529116cf64276011f2f23003fd7101680654e01d6d
df2e6be3571e714b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959331560033506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ed7aa9b84562bf2b23cef2f8d68c11,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2,PodSandboxId:660c6f6820684b2803663e3ae14c3da038948fecc02fac3222f890
ad8d1962c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959331403180391,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13ab2db04965b7fbdf470dfbea33600c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0,PodSandboxId:fa3caec20b349a9a46a785e085ab3b335
de0b473ec320006d7eb731c38acbf65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959331281308081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97bf050bc1d8c52effe065b7bf80e5f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd,PodSandboxId:d316d6ebcab81373423414bed94841c87399f5b
dee4900240eb8fdcd3819ec2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959331155370390,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-353455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f9ccac77344020e79fc28d16b60036,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7,PodSandboxId:b4d147175f84abcb63ac9ee449e075c811765d428ba98b4b8dc8b36344911135,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959292806521535,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x2rvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67c002b2-c432-4972-a8d6-efe417f15e76,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbe750dd-31a8-4864-a5c3-d1b97749f99b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	41cb33bf1862d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   88ce4d3a6dde8       coredns-6f6b679f8f-slhgn
	1678563060dd8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   0f5a7d5dfdc0c       coredns-6f6b679f8f-vh8gg
	f133f6463690d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Exited              storage-provisioner       2                   e407d6bfda9cb       storage-provisioner
	56faebaaa3d5a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago        Running             kube-apiserver            2                   969ba7529116c       kube-apiserver-kubernetes-upgrade-353455
	ae8a0223d040e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago        Running             kube-scheduler            2                   fa3caec20b349       kube-scheduler-kubernetes-upgrade-353455
	b31107f20ba69       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago        Running             kube-controller-manager   2                   660c6f6820684       kube-controller-manager-kubernetes-upgrade-353455
	db6ab33b09578       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago        Running             etcd                      2                   d316d6ebcab81       etcd-kubernetes-upgrade-353455
	c7e0f44fc45d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago       Exited              coredns                   1                   88ce4d3a6dde8       coredns-6f6b679f8f-slhgn
	c8cd73d6720d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago       Exited              coredns                   1                   0f5a7d5dfdc0c       coredns-6f6b679f8f-vh8gg
	1a78648db20de       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   36 seconds ago       Exited              kube-apiserver            1                   969ba7529116c       kube-apiserver-kubernetes-upgrade-353455
	c6e60e124253c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   36 seconds ago       Running             kube-proxy                1                   9ef556f9f6add       kube-proxy-x2rvn
	212a7de66df56       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   36 seconds ago       Exited              kube-controller-manager   1                   660c6f6820684       kube-controller-manager-kubernetes-upgrade-353455
	cf1d139c8dd93       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   37 seconds ago       Exited              kube-scheduler            1                   fa3caec20b349       kube-scheduler-kubernetes-upgrade-353455
	008c80c30cf67       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   37 seconds ago       Exited              etcd                      1                   d316d6ebcab81       etcd-kubernetes-upgrade-353455
	285e5ee3c69a9       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   b4d147175f84a       kube-proxy-x2rvn
	
	
	==> coredns [1678563060dd8f85c5ee782b40fc67a20ff5a1e445b978e0d5ee0dd69816ccf6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [41cb33bf1862d18d2e61300ab7a03eed124f9e097b728b4dee9f3e0e07853bfa] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-353455
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-353455
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-353455
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:22:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:22:43 +0000   Thu, 29 Aug 2024 19:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:22:43 +0000   Thu, 29 Aug 2024 19:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:22:43 +0000   Thu, 29 Aug 2024 19:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:22:43 +0000   Thu, 29 Aug 2024 19:21:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.102
	  Hostname:    kubernetes-upgrade-353455
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bfe5d58a5f944ff83be39208a581fce
	  System UUID:                9bfe5d58-a5f9-44ff-83be-39208a581fce
	  Boot ID:                    5d4335b6-6404-4f4f-9732-d4b68c6fc97c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-slhgn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 coredns-6f6b679f8f-vh8gg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-kubernetes-upgrade-353455                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	  kube-system                 kube-apiserver-kubernetes-upgrade-353455             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-353455    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-x2rvn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-kubernetes-upgrade-353455             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 33s                kube-proxy       
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s (x8 over 91s)  kubelet          Node kubernetes-upgrade-353455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 91s)  kubelet          Node kubernetes-upgrade-353455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 91s)  kubelet          Node kubernetes-upgrade-353455 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           78s                node-controller  Node kubernetes-upgrade-353455 event: Registered Node kubernetes-upgrade-353455 in Controller
	  Normal  CIDRAssignmentFailed     78s                cidrAllocator    Node kubernetes-upgrade-353455 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           30s                node-controller  Node kubernetes-upgrade-353455 event: Registered Node kubernetes-upgrade-353455 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-353455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-353455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-353455 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-353455 event: Registered Node kubernetes-upgrade-353455 in Controller
	
	
	==> dmesg <==
	[Aug29 19:21] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.652751] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.056787] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060959] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.192239] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.131551] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.256026] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +3.981885] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +1.849928] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.061461] kauditd_printk_skb: 158 callbacks suppressed
	[ +14.825419] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.101741] kauditd_printk_skb: 69 callbacks suppressed
	[Aug29 19:22] systemd-fstab-generator[2170]: Ignoring "noauto" option for root device
	[  +0.096217] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.062507] systemd-fstab-generator[2182]: Ignoring "noauto" option for root device
	[  +0.234358] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.156713] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[  +0.313101] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +2.114716] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +3.976069] kauditd_printk_skb: 228 callbacks suppressed
	[ +24.104772] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +4.644370] kauditd_printk_skb: 44 callbacks suppressed
	[  +1.231935] systemd-fstab-generator[3961]: Ignoring "noauto" option for root device
	
	
	==> etcd [008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd] <==
	{"level":"info","ts":"2024-08-29T19:22:13.204082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 received MsgPreVoteResp from 412c11a895d830d7 at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:13.204096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:13.204102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 received MsgVoteResp from 412c11a895d830d7 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:13.204112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:13.204120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 412c11a895d830d7 elected leader 412c11a895d830d7 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:13.209408Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"412c11a895d830d7","local-member-attributes":"{Name:kubernetes-upgrade-353455 ClientURLs:[https://192.168.50.102:2379]}","request-path":"/0/members/412c11a895d830d7/attributes","cluster-id":"600a52c08a1fb2d4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:22:13.209969Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:13.210953Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:13.219719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.102:2379"}
	{"level":"info","ts":"2024-08-29T19:22:13.223768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:13.224973Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:13.226557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:22:13.227715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:13.229029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:27.827529Z","caller":"traceutil/trace.go:171","msg":"trace[2122694694] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"142.985291ms","start":"2024-08-29T19:22:27.684513Z","end":"2024-08-29T19:22:27.827499Z","steps":["trace[2122694694] 'process raft request'  (duration: 142.863428ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:22:38.420676Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-29T19:22:38.420743Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-353455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.102:2380"],"advertise-client-urls":["https://192.168.50.102:2379"]}
	{"level":"warn","ts":"2024-08-29T19:22:38.420818Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T19:22:38.420846Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T19:22:38.421450Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.102:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T19:22:38.421532Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.102:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T19:22:38.421624Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"412c11a895d830d7","current-leader-member-id":"412c11a895d830d7"}
	{"level":"info","ts":"2024-08-29T19:22:38.424888Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.102:2380"}
	{"level":"info","ts":"2024-08-29T19:22:38.425084Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.102:2380"}
	{"level":"info","ts":"2024-08-29T19:22:38.425151Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-353455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.102:2380"],"advertise-client-urls":["https://192.168.50.102:2379"]}
	
	
	==> etcd [db6ab33b0957833fc60a863d02363c414c90412d3278071931f60c131d281816] <==
	{"level":"info","ts":"2024-08-29T19:22:40.905557Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:40.905644Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:40.905690Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:40.904359Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:40.911261Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:22:40.911776Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"412c11a895d830d7","initial-advertise-peer-urls":["https://192.168.50.102:2380"],"listen-peer-urls":["https://192.168.50.102:2380"],"advertise-client-urls":["https://192.168.50.102:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.102:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:22:40.911849Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:22:40.912534Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.102:2380"}
	{"level":"info","ts":"2024-08-29T19:22:40.912560Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.102:2380"}
	{"level":"info","ts":"2024-08-29T19:22:41.480001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:41.480047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:41.480070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 received MsgPreVoteResp from 412c11a895d830d7 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:41.480081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 became candidate at term 4"}
	{"level":"info","ts":"2024-08-29T19:22:41.480086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 received MsgVoteResp from 412c11a895d830d7 at term 4"}
	{"level":"info","ts":"2024-08-29T19:22:41.480095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"412c11a895d830d7 became leader at term 4"}
	{"level":"info","ts":"2024-08-29T19:22:41.480102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 412c11a895d830d7 elected leader 412c11a895d830d7 at term 4"}
	{"level":"info","ts":"2024-08-29T19:22:41.486778Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"412c11a895d830d7","local-member-attributes":"{Name:kubernetes-upgrade-353455 ClientURLs:[https://192.168.50.102:2379]}","request-path":"/0/members/412c11a895d830d7/attributes","cluster-id":"600a52c08a1fb2d4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:22:41.486953Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:41.487765Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:41.488521Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:22:41.488781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:41.490985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:41.491063Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:41.491106Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:41.491679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.102:2379"}
	
	
	==> kernel <==
	 19:22:48 up 1 min,  0 users,  load average: 0.86, 0.37, 0.14
	Linux kubernetes-upgrade-353455 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98] <==
	I0829 19:22:28.054361       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0829 19:22:28.054604       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0829 19:22:28.054735       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I0829 19:22:28.054776       1 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0829 19:22:28.054798       1 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
	I0829 19:22:28.054812       1 establishing_controller.go:92] Shutting down EstablishingController
	I0829 19:22:28.054844       1 naming_controller.go:305] Shutting down NamingConditionController
	I0829 19:22:28.054874       1 controller.go:120] Shutting down OpenAPI V3 controller
	I0829 19:22:28.054932       1 controller.go:170] Shutting down OpenAPI controller
	I0829 19:22:28.055414       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0829 19:22:28.055524       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0829 19:22:28.056123       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0829 19:22:28.056934       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0829 19:22:28.056985       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0829 19:22:28.057004       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0829 19:22:28.057054       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0829 19:22:28.057087       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0829 19:22:28.057293       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0829 19:22:28.057556       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0829 19:22:28.060120       1 controller.go:157] Shutting down quota evaluator
	I0829 19:22:28.060147       1 controller.go:176] quota evaluator worker shutdown
	I0829 19:22:28.060581       1 controller.go:176] quota evaluator worker shutdown
	I0829 19:22:28.060608       1 controller.go:176] quota evaluator worker shutdown
	I0829 19:22:28.060616       1 controller.go:176] quota evaluator worker shutdown
	I0829 19:22:28.060621       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [56faebaaa3d5a20d789c1ecedc89011c3f614ada0ccf0ae1c4a68f9cd223a687] <==
	I0829 19:22:43.548316       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:22:43.548560       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:22:43.548597       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:22:43.548607       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:22:43.548614       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:22:43.571289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:22:43.606554       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:22:43.606631       1 policy_source.go:224] refreshing policies
	I0829 19:22:43.638095       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 19:22:43.638187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:22:43.638547       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:22:43.639091       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:22:43.641681       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:22:43.641708       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:22:43.644409       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:22:43.653361       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 19:22:44.459280       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0829 19:22:44.853248       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.102]
	I0829 19:22:44.855260       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:22:44.865466       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 19:22:45.421179       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:22:45.436460       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:22:45.481685       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:22:45.523550       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:22:45.539091       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2] <==
	I0829 19:22:18.708946       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0829 19:22:18.712265       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0829 19:22:18.717168       1 shared_informer.go:320] Caches are synced for taint
	I0829 19:22:18.717344       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0829 19:22:18.717420       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-353455"
	I0829 19:22:18.717451       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0829 19:22:18.720600       1 shared_informer.go:320] Caches are synced for deployment
	I0829 19:22:18.721319       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0829 19:22:18.760090       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:18.769982       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0829 19:22:18.770198       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-353455"
	I0829 19:22:18.819508       1 shared_informer.go:320] Caches are synced for service account
	I0829 19:22:18.819622       1 shared_informer.go:320] Caches are synced for persistent volume
	I0829 19:22:18.876678       1 shared_informer.go:320] Caches are synced for namespace
	I0829 19:22:18.920367       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0829 19:22:18.920503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0829 19:22:18.920523       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0829 19:22:18.920596       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0829 19:22:18.928172       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0829 19:22:19.294578       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:19.294604       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 19:22:19.335491       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:22.537002       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="33.380454ms"
	I0829 19:22:22.541378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="148.943µs"
	I0829 19:22:23.256279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="48.573µs"
	
	
	==> kube-controller-manager [b31107f20ba69a54e31e867ef89cbe5f24d4f61974b851674c736da276953753] <==
	I0829 19:22:46.871351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="65.794µs"
	I0829 19:22:46.871194       1 shared_informer.go:320] Caches are synced for cronjob
	I0829 19:22:46.871206       1 shared_informer.go:320] Caches are synced for endpoint
	I0829 19:22:46.871219       1 shared_informer.go:320] Caches are synced for GC
	I0829 19:22:46.874532       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0829 19:22:46.875651       1 shared_informer.go:320] Caches are synced for expand
	I0829 19:22:46.876370       1 shared_informer.go:320] Caches are synced for taint
	I0829 19:22:46.876875       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0829 19:22:46.877551       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-353455"
	I0829 19:22:46.877638       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0829 19:22:46.877470       1 shared_informer.go:320] Caches are synced for stateful set
	I0829 19:22:46.879884       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0829 19:22:46.879960       1 shared_informer.go:320] Caches are synced for disruption
	I0829 19:22:46.882365       1 shared_informer.go:320] Caches are synced for service account
	I0829 19:22:46.885958       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0829 19:22:46.897885       1 shared_informer.go:320] Caches are synced for persistent volume
	I0829 19:22:46.906500       1 shared_informer.go:320] Caches are synced for attach detach
	I0829 19:22:47.032143       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0829 19:22:47.079278       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:47.082940       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0829 19:22:47.083048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-353455"
	I0829 19:22:47.096710       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:47.511184       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:47.511230       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 19:22:47.516401       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:21:33.455461       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:21:33.494833       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.102"]
	E0829 19:21:33.494937       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:21:33.566875       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:21:33.567057       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:21:33.567098       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:21:33.570072       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:21:33.570344       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:21:33.570367       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:21:33.571871       1 config.go:197] "Starting service config controller"
	I0829 19:21:33.571955       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:21:33.571986       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:21:33.571990       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:21:33.572473       1 config.go:326] "Starting node config controller"
	I0829 19:21:33.572506       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:21:33.672471       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:21:33.672557       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:21:33.672515       1 shared_informer.go:320] Caches are synced for endpoint slice config
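Note: the truncated error at the top of this section (and again in the second kube-proxy block below) is kube-proxy's startup cleanup of leftover nftables rules. Judging by the "/dev/stdin:1:1-25" location in the message, it appears to be roughly equivalent to running the following inside the guest, which fails the same way on a kernel without nf_tables support:
	echo 'add table ip kube-proxy' | nft -f /dev/stdin
kube-proxy only logs the cleanup failure and carries on with the iptables proxier, as the "Using iptables Proxier" line above shows, so this error is not itself the cause of the test failure.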
	
	
	==> kube-proxy [c6e60e124253cbde693be91907407dcaf0037eae2683ada1014cf32c77523a00] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:22:14.226025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:22:15.404698       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.102"]
	E0829 19:22:15.425407       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:22:15.509199       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:22:15.509313       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:22:15.509370       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:22:15.518050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:22:15.519366       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:22:15.519460       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:15.523138       1 config.go:197] "Starting service config controller"
	I0829 19:22:15.523242       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:22:15.523295       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:22:15.523319       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:22:15.526119       1 config.go:326] "Starting node config controller"
	I0829 19:22:15.527479       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:22:15.628984       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:22:15.629032       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:22:15.629058       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ae8a0223d040e31af5d2796409d8ef34a095691c2aa9b2b0ce5f71c41b22be9b] <==
	I0829 19:22:41.454857       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:22:43.538164       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:22:43.538243       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:22:43.538254       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:22:43.538263       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:22:43.568305       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:22:43.568350       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:43.570704       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:22:43.570833       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:22:43.570981       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:22:43.571098       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:22:43.671215       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
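Note: the requestheader_controller warning above is a common, usually benign startup message; the scheduler authenticates as "system:kube-scheduler", which is not allowed to read the extension-apiserver-authentication ConfigMap, so it falls back to running without that authentication configuration. The log itself states the usual fix; filled in with illustrative names for this cluster (the rolebinding name is an assumption, and --user is used here because the scheduler runs with a client certificate rather than a service account), it would look like:
	kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader-binding --role=extension-apiserver-authentication-reader --user=system:kube-scheduler
The test did not run this command; it is only a sketch of the suggestion embedded in the warning.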
	
	
	==> kube-scheduler [cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0] <==
	I0829 19:22:13.301424       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:22:15.359822       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:22:15.359885       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:22:15.360069       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:22:15.360082       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:22:15.397483       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:22:15.397515       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:15.400928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:22:15.401006       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:22:15.402188       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:22:15.402264       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:22:15.501754       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:22:38.385186       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0829 19:22:38.385340       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0829 19:22:38.385496       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:40.440072    3513 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-353455"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: E0829 19:22:40.440839    3513 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.102:8443: connect: connection refused" node="kubernetes-upgrade-353455"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:40.548372    3513 scope.go:117] "RemoveContainer" containerID="008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:40.549055    3513 scope.go:117] "RemoveContainer" containerID="1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:40.551186    3513 scope.go:117] "RemoveContainer" containerID="212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:40.551374    3513 scope.go:117] "RemoveContainer" containerID="cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: E0829 19:22:40.671386    3513 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-353455?timeout=10s\": dial tcp 192.168.50.102:8443: connect: connection refused" interval="800ms"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:40.842304    3513 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-353455"
	Aug 29 19:22:40 kubernetes-upgrade-353455 kubelet[3513]: E0829 19:22:40.843194    3513 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.102:8443: connect: connection refused" node="kubernetes-upgrade-353455"
	Aug 29 19:22:41 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:41.645474    3513 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-353455"
	Aug 29 19:22:43 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:43.710063    3513 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-353455"
	Aug 29 19:22:43 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:43.710249    3513 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-353455"
	Aug 29 19:22:43 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:43.710285    3513 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 29 19:22:43 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:43.711603    3513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.049097    3513 apiserver.go:52] "Watching apiserver"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.064922    3513 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.135173    3513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67c002b2-c432-4972-a8d6-efe417f15e76-xtables-lock\") pod \"kube-proxy-x2rvn\" (UID: \"67c002b2-c432-4972-a8d6-efe417f15e76\") " pod="kube-system/kube-proxy-x2rvn"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.135398    3513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea28c0f8-becd-4289-be1d-1a2ee7c649f3-tmp\") pod \"storage-provisioner\" (UID: \"ea28c0f8-becd-4289-be1d-1a2ee7c649f3\") " pod="kube-system/storage-provisioner"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.135486    3513 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67c002b2-c432-4972-a8d6-efe417f15e76-lib-modules\") pod \"kube-proxy-x2rvn\" (UID: \"67c002b2-c432-4972-a8d6-efe417f15e76\") " pod="kube-system/kube-proxy-x2rvn"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.354241    3513 scope.go:117] "RemoveContainer" containerID="c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.354835    3513 scope.go:117] "RemoveContainer" containerID="c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6"
	Aug 29 19:22:44 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:44.355179    3513 scope.go:117] "RemoveContainer" containerID="d390a833f84a3199f7a1e4020b262916b76f50a210ff2ee2a9ab18fd2786fc5d"
	Aug 29 19:22:45 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:45.247681    3513 scope.go:117] "RemoveContainer" containerID="d390a833f84a3199f7a1e4020b262916b76f50a210ff2ee2a9ab18fd2786fc5d"
	Aug 29 19:22:45 kubernetes-upgrade-353455 kubelet[3513]: I0829 19:22:45.248005    3513 scope.go:117] "RemoveContainer" containerID="f133f6463690dd15bba7da278412436fa872f343c5f722038dddaecdac15e72b"
	Aug 29 19:22:45 kubernetes-upgrade-353455 kubelet[3513]: E0829 19:22:45.248127    3513 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea28c0f8-becd-4289-be1d-1a2ee7c649f3)\"" pod="kube-system/storage-provisioner" podUID="ea28c0f8-becd-4289-be1d-1a2ee7c649f3"
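Note: "back-off 10s" is kubelet's standard CrashLoopBackOff behaviour; the restart delay starts at 10s and doubles on each subsequent failure (capped at 5m), so a single early crash like this one typically clears on its own once the API server is reachable again. A quick way to confirm, reusing the same kubectl context as the post-mortem commands further below:
	kubectl --context kubernetes-upgrade-353455 -n kube-system get pod storage-provisioner -w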
	
	
	==> storage-provisioner [f133f6463690dd15bba7da278412436fa872f343c5f722038dddaecdac15e72b] <==
	I0829 19:22:44.528541       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0829 19:22:44.531826       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
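Note: 10.96.0.1 is the ClusterIP of the default "kubernetes" Service (the first address of the default 10.96.0.0/12 service CIDR), so this fatal log simply means the provisioner started while the API server was still coming back up. A minimal client-go sketch of the same failure mode follows; it is not the provisioner's actual source, and the log text is illustrative:

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the API server via the kubernetes Service VIP (10.96.0.1 here).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("building clientset: %v", err)
		}
		// The version probe is what fails with "connect: connection refused" while the apiserver is down.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("server version: %s", v.GitVersion)
	}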
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:22:47.809002   64901 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
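Note: the "bufio.Scanner: token too long" failure in the stderr block above comes from Go's bufio.Scanner, which by default rejects any single line longer than bufio.MaxScanTokenSize (64 KiB); the lastStart.txt log evidently contains such a line. A minimal sketch of the issue and the usual workaround (the file path is a placeholder, not the harness's real code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder for the real log path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any line over 64 KiB makes sc.Err() return "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text()
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}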
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-353455 -n kubernetes-upgrade-353455
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-353455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-353455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-353455
--- FAIL: TestKubernetesUpgrade (430.44s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (40.2s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-518621 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-518621 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.238959936s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-518621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-518621" primary control-plane node in "pause-518621" cluster
	* Updating the running kvm2 "pause-518621" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-518621" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:22:06.543047   64307 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:22:06.543136   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543143   64307 out.go:358] Setting ErrFile to fd 2...
	I0829 19:22:06.543147   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543333   64307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:22:06.543919   64307 out.go:352] Setting JSON to false
	I0829 19:22:06.544855   64307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7474,"bootTime":1724951853,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:22:06.544918   64307 start.go:139] virtualization: kvm guest
	I0829 19:22:06.547020   64307 out.go:177] * [pause-518621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:22:06.548267   64307 notify.go:220] Checking for updates...
	I0829 19:22:06.548290   64307 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:22:06.549457   64307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:22:06.550545   64307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:22:06.551572   64307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:22:06.552629   64307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:22:06.553879   64307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:22:06.555449   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:06.556008   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.556072   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.571521   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0829 19:22:06.572004   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.572569   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.572593   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.572979   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.573186   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.573422   64307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:22:06.573774   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.573811   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.588552   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0829 19:22:06.589111   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.589660   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.589684   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.590034   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.590283   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.626910   64307 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:22:06.628114   64307 start.go:297] selected driver: kvm2
	I0829 19:22:06.628134   64307 start.go:901] validating driver "kvm2" against &{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.628330   64307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:22:06.628800   64307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.628902   64307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:22:06.644356   64307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:22:06.645155   64307 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.645172   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.645238   64307 start.go:340] cluster config:
	{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.645392   64307 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.647209   64307 out.go:177] * Starting "pause-518621" primary control-plane node in "pause-518621" cluster
	I0829 19:22:06.648577   64307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:06.648622   64307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:22:06.648630   64307 cache.go:56] Caching tarball of preloaded images
	I0829 19:22:06.648726   64307 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:22:06.648739   64307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:22:06.648910   64307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/config.json ...
	I0829 19:22:06.649147   64307 start.go:360] acquireMachinesLock for pause-518621: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:22:08.347255   64307 start.go:364] duration metric: took 1.698077985s to acquireMachinesLock for "pause-518621"
	I0829 19:22:08.347323   64307 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:22:08.347332   64307 fix.go:54] fixHost starting: 
	I0829 19:22:08.347776   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:08.347825   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:08.368493   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0829 19:22:08.368962   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:08.369484   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:08.369509   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:08.369874   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:08.370063   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.370236   64307 main.go:141] libmachine: (pause-518621) Calling .GetState
	I0829 19:22:08.371946   64307 fix.go:112] recreateIfNeeded on pause-518621: state=Running err=<nil>
	W0829 19:22:08.371976   64307 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:22:08.550441   64307 out.go:177] * Updating the running kvm2 "pause-518621" VM ...
	I0829 19:22:08.722026   64307 machine.go:93] provisionDockerMachine start ...
	I0829 19:22:08.722069   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.722439   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.726052   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726492   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.726524   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726729   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.726937   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727124   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727292   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.727554   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.727786   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.727802   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:22:08.843036   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.843068   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843321   64307 buildroot.go:166] provisioning hostname "pause-518621"
	I0829 19:22:08.843350   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.846965   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847413   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.847437   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847621   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.847834   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.847964   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.848145   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.848330   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.848533   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.848548   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-518621 && echo "pause-518621" | sudo tee /etc/hostname
	I0829 19:22:08.976480   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.976511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.979685   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980082   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.980117   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980399   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.980639   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980819   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980959   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.981169   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.981413   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.981469   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-518621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-518621/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-518621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:22:09.083539   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:22:09.083574   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:22:09.083617   64307 buildroot.go:174] setting up certificates
	I0829 19:22:09.083631   64307 provision.go:84] configureAuth start
	I0829 19:22:09.083641   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:09.083917   64307 main.go:141] libmachine: (pause-518621) Calling .GetIP
	I0829 19:22:09.086993   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087524   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.087577   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087752   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.090258   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090527   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.090555   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090675   64307 provision.go:143] copyHostCerts
	I0829 19:22:09.090733   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:22:09.090746   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:22:09.162320   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:22:09.162489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:22:09.162502   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:22:09.162543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:22:09.162620   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:22:09.162629   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:22:09.162660   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:22:09.162723   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.pause-518621 san=[127.0.0.1 192.168.61.203 localhost minikube pause-518621]
	I0829 19:22:09.520291   64307 provision.go:177] copyRemoteCerts
	I0829 19:22:09.520373   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:22:09.520413   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.523620   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.523990   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.524022   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.524271   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.524511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.524733   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.524894   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:09.611312   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:22:09.639602   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:22:09.669692   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0829 19:22:09.705901   64307 provision.go:87] duration metric: took 622.256236ms to configureAuth
	I0829 19:22:09.705938   64307 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:22:09.706215   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:09.706332   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.709310   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709726   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.709758   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709943   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.710159   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710330   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.710714   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:09.710910   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:09.710932   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:22:15.228194   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:22:15.228226   64307 machine.go:96] duration metric: took 6.506174181s to provisionDockerMachine
	I0829 19:22:15.228239   64307 start.go:293] postStartSetup for "pause-518621" (driver="kvm2")
	I0829 19:22:15.228251   64307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:22:15.228272   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:15.228583   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:22:15.228618   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:15.231726   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.232110   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:15.232140   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.232344   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:15.232548   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:15.232706   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:15.232874   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:15.318773   64307 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:22:15.323125   64307 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:22:15.323155   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:22:15.323229   64307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:22:15.323326   64307 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:22:15.323448   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:22:15.333787   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:15.361050   64307 start.go:296] duration metric: took 132.79722ms for postStartSetup
	I0829 19:22:15.361097   64307 fix.go:56] duration metric: took 7.013765315s for fixHost
	I0829 19:22:15.361123   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:15.364797   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.365281   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:15.365311   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.365590   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:15.365812   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:15.366016   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:15.366215   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:15.366426   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:15.366640   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:15.366659   64307 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:22:15.475615   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959335.465543363
	
	I0829 19:22:15.475644   64307 fix.go:216] guest clock: 1724959335.465543363
	I0829 19:22:15.475656   64307 fix.go:229] Guest: 2024-08-29 19:22:15.465543363 +0000 UTC Remote: 2024-08-29 19:22:15.361102853 +0000 UTC m=+8.856097471 (delta=104.44051ms)
	I0829 19:22:15.475714   64307 fix.go:200] guest clock delta is within tolerance: 104.44051ms
	I0829 19:22:15.475726   64307 start.go:83] releasing machines lock for "pause-518621", held for 7.128424912s
	I0829 19:22:15.475757   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:15.476044   64307 main.go:141] libmachine: (pause-518621) Calling .GetIP
	I0829 19:22:15.479674   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.480101   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:15.480124   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.480336   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:15.481012   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:15.481251   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:15.481342   64307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:22:15.481396   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:15.481479   64307 ssh_runner.go:195] Run: cat /version.json
	I0829 19:22:15.481497   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:15.484736   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.485016   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.485211   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:15.485244   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.485409   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:15.485629   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:15.485642   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:15.485658   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:15.485799   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:15.485913   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:15.485989   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:15.486038   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:15.486180   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:15.486303   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:15.601227   64307 ssh_runner.go:195] Run: systemctl --version
	I0829 19:22:15.609687   64307 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:22:15.775908   64307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:22:15.782793   64307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:22:15.782873   64307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:22:15.793926   64307 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:22:15.793949   64307 start.go:495] detecting cgroup driver to use...
	I0829 19:22:15.794009   64307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:22:15.842498   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:22:15.901019   64307 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:22:15.901082   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:22:15.945464   64307 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:22:16.102746   64307 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:22:16.363516   64307 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:22:16.681193   64307 docker.go:233] disabling docker service ...
	I0829 19:22:16.681277   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:22:16.725544   64307 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:22:16.763769   64307 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:22:17.047283   64307 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:22:17.277553   64307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:22:17.304240   64307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:22:17.335240   64307 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:22:17.335309   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.352964   64307 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:22:17.353045   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.366130   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.379748   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.390728   64307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:22:17.405448   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.418180   64307 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.449155   64307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:17.471666   64307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:22:17.490311   64307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:22:17.514240   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:17.732894   64307 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:22:18.222348   64307 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:22:18.222538   64307 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:22:18.230261   64307 start.go:563] Will wait 60s for crictl version
	I0829 19:22:18.230332   64307 ssh_runner.go:195] Run: which crictl
	I0829 19:22:18.245822   64307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:22:18.398615   64307 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:22:18.398710   64307 ssh_runner.go:195] Run: crio --version
	I0829 19:22:18.512203   64307 ssh_runner.go:195] Run: crio --version
	I0829 19:22:18.744540   64307 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:22:18.745624   64307 main.go:141] libmachine: (pause-518621) Calling .GetIP
	I0829 19:22:18.749472   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:18.749981   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:18.750013   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:18.750351   64307 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:22:18.760897   64307 kubeadm.go:883] updating cluster {Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:22:18.761065   64307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:18.761134   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:18.856336   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:18.856366   64307 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:22:18.856439   64307 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:18.912685   64307 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:18.912714   64307 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:18.912724   64307 kubeadm.go:934] updating node { 192.168.61.203 8443 v1.31.0 crio true true} ...
	I0829 19:22:18.912963   64307 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-518621 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:22:18.913047   64307 ssh_runner.go:195] Run: crio config
	I0829 19:22:18.993270   64307 cni.go:84] Creating CNI manager for ""
	I0829 19:22:18.993297   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:18.993316   64307 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:18.993343   64307 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.203 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-518621 NodeName:pause-518621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:18.993504   64307 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-518621"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:22:18.993582   64307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:19.012687   64307 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:19.012768   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:19.022201   64307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 19:22:19.039004   64307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:19.057082   64307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0829 19:22:19.075083   64307 ssh_runner.go:195] Run: grep 192.168.61.203	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:19.078824   64307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:19.234156   64307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:22:19.256005   64307 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621 for IP: 192.168.61.203
	I0829 19:22:19.256029   64307 certs.go:194] generating shared ca certs ...
	I0829 19:22:19.256049   64307 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:19.256235   64307 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:19.256299   64307 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:19.256314   64307 certs.go:256] generating profile certs ...
	I0829 19:22:19.256434   64307 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/client.key
	I0829 19:22:19.256517   64307 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/apiserver.key.543ed734
	I0829 19:22:19.256566   64307 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/proxy-client.key
	I0829 19:22:19.256709   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:19.256754   64307 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:19.256767   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:19.256807   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:19.256842   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:19.256874   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:19.256930   64307 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:19.257644   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:19.282440   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:19.308128   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:19.332207   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:19.361343   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 19:22:19.420707   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:22:19.443536   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:19.466063   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:22:19.490195   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:19.511392   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:19.534548   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:19.557641   64307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:19.573403   64307 ssh_runner.go:195] Run: openssl version
	I0829 19:22:19.579019   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:19.590345   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:19.594827   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:19.594887   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:19.600549   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:22:19.609787   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:19.622480   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:19.626900   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:19.626945   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:19.632819   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:22:19.643647   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:19.654796   64307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:19.659064   64307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:19.659115   64307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:19.664850   64307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:19.674534   64307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:19.678636   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:22:19.683951   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:22:19.691308   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:22:19.697007   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:22:19.703051   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:22:19.709747   64307 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:22:19.716114   64307 kubeadm.go:392] StartCluster: {Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:19.716261   64307 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:19.716325   64307 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:19.763804   64307 cri.go:89] found id: "a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4"
	I0829 19:22:19.763828   64307 cri.go:89] found id: "bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97"
	I0829 19:22:19.763834   64307 cri.go:89] found id: "cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee"
	I0829 19:22:19.763840   64307 cri.go:89] found id: "03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c"
	I0829 19:22:19.763844   64307 cri.go:89] found id: "fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50"
	I0829 19:22:19.763849   64307 cri.go:89] found id: "791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6"
	I0829 19:22:19.763855   64307 cri.go:89] found id: ""
	I0829 19:22:19.763904   64307 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-518621 -n pause-518621
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-518621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-518621 logs -n 25: (1.342414973s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-633326 sudo cat                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo cat                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo cat                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo find                 | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo crio                 | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-633326                           | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC | 29 Aug 24 19:19 UTC |
	| start   | -p cert-options-034564                     | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC | 29 Aug 24 19:21 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	| start   | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:21 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-523972 ssh cat          | force-systemd-flag-523972 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf         |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-523972               | force-systemd-flag-523972 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	| start   | -p pause-518621 --memory=2048              | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:22 UTC |
	|         | --install-addons=false                     |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | cert-options-034564 ssh                    | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	|         | openssl x509 -text -noout -in              |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt      |                           |         |         |                     |                     |
	| ssh     | -p cert-options-034564 -- sudo             | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	|         | cat /etc/kubernetes/admin.conf             |                           |         |         |                     |                     |
	| delete  | -p cert-options-034564                     | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	| start   | -p auto-633326 --memory=3072               | auto-633326               | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p pause-518621                            | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:22:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
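The header above is the klog-style prefix used throughout this trace. A minimal Go sketch, purely for illustration, of a program that emits the same [IWEF]mmdd hh:mm:ss.uuuuuu pid file:line] header shape; it assumes k8s.io/klog/v2 directly rather than minikube's own out/log wrappers:

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		// klog prefixes every line with the severity letter (I/W/E/F),
		// date, time, pid and emitting file:line, as in the trace above.
		klog.InitFlags(nil)
		flag.Parse()
		defer klog.Flush()

		klog.Infof("Setting OutFile to fd %d ...", 1)
		klog.Warningf("unexpected machine state, will restart: %v", nil)
	}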
	I0829 19:22:06.543047   64307 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:22:06.543136   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543143   64307 out.go:358] Setting ErrFile to fd 2...
	I0829 19:22:06.543147   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543333   64307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:22:06.543919   64307 out.go:352] Setting JSON to false
	I0829 19:22:06.544855   64307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7474,"bootTime":1724951853,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:22:06.544918   64307 start.go:139] virtualization: kvm guest
	I0829 19:22:06.547020   64307 out.go:177] * [pause-518621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:22:06.548267   64307 notify.go:220] Checking for updates...
	I0829 19:22:06.548290   64307 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:22:06.549457   64307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:22:06.550545   64307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:22:06.551572   64307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:22:06.552629   64307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:22:06.553879   64307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:22:06.555449   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:06.556008   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.556072   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.571521   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0829 19:22:06.572004   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.572569   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.572593   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.572979   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.573186   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.573422   64307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:22:06.573774   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.573811   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.588552   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0829 19:22:06.589111   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.589660   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.589684   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.590034   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.590283   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.626910   64307 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:22:06.628114   64307 start.go:297] selected driver: kvm2
	I0829 19:22:06.628134   64307 start.go:901] validating driver "kvm2" against &{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.628330   64307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:22:06.628800   64307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.628902   64307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:22:06.644356   64307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:22:06.645155   64307 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.645172   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.645238   64307 start.go:340] cluster config:
	{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.645392   64307 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.647209   64307 out.go:177] * Starting "pause-518621" primary control-plane node in "pause-518621" cluster
	I0829 19:22:06.648577   64307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:06.648622   64307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:22:06.648630   64307 cache.go:56] Caching tarball of preloaded images
	I0829 19:22:06.648726   64307 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:22:06.648739   64307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:22:06.648910   64307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/config.json ...
	I0829 19:22:06.649147   64307 start.go:360] acquireMachinesLock for pause-518621: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:22:08.347255   64307 start.go:364] duration metric: took 1.698077985s to acquireMachinesLock for "pause-518621"
	I0829 19:22:08.347323   64307 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:22:08.347332   64307 fix.go:54] fixHost starting: 
	I0829 19:22:08.347776   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:08.347825   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:08.368493   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0829 19:22:08.368962   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:08.369484   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:08.369509   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:08.369874   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:08.370063   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.370236   64307 main.go:141] libmachine: (pause-518621) Calling .GetState
	I0829 19:22:08.371946   64307 fix.go:112] recreateIfNeeded on pause-518621: state=Running err=<nil>
	W0829 19:22:08.371976   64307 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:22:08.550441   64307 out.go:177] * Updating the running kvm2 "pause-518621" VM ...
	I0829 19:22:08.114559   63960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:22:08.114586   63960 machine.go:96] duration metric: took 6.693314723s to provisionDockerMachine
	I0829 19:22:08.114598   63960 start.go:293] postStartSetup for "kubernetes-upgrade-353455" (driver="kvm2")
	I0829 19:22:08.114607   63960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:22:08.114626   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.115022   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:22:08.115049   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.118095   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.118498   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.118529   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.118720   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.118905   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.119118   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.119320   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.200131   63960 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:22:08.203930   63960 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:22:08.203953   63960 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:22:08.204015   63960 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:22:08.204112   63960 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:22:08.204234   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:22:08.213344   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:08.239778   63960 start.go:296] duration metric: took 125.16719ms for postStartSetup
	I0829 19:22:08.239819   63960 fix.go:56] duration metric: took 6.844921079s for fixHost
	I0829 19:22:08.239848   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.243125   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.243470   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.243500   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.243637   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.243812   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.244002   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.244175   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.244350   63960 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.244514   63960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:22:08.244530   63960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:22:08.347128   63960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959328.335613934
	
	I0829 19:22:08.347147   63960 fix.go:216] guest clock: 1724959328.335613934
	I0829 19:22:08.347154   63960 fix.go:229] Guest: 2024-08-29 19:22:08.335613934 +0000 UTC Remote: 2024-08-29 19:22:08.239823526 +0000 UTC m=+34.502528738 (delta=95.790408ms)
	I0829 19:22:08.347171   63960 fix.go:200] guest clock delta is within tolerance: 95.790408ms
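The fix.go lines above compare the host clock with the guest clock read via date +%s.%N and only resync when the delta leaves a tolerance window. A rough standalone sketch of that check; the helper name and the 2s threshold are assumptions for illustration, not minikube's actual values:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Sample value taken from the SSH output in the log above.
		guest, err := parseGuestClock("1724959328.335613934\n")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}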
	I0829 19:22:08.347176   63960 start.go:83] releasing machines lock for "kubernetes-upgrade-353455", held for 6.952310233s
	I0829 19:22:08.347198   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.347465   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:22:08.350559   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.350972   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.351001   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.351129   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351658   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351847   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351951   63960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:22:08.352005   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.352074   63960 ssh_runner.go:195] Run: cat /version.json
	I0829 19:22:08.352094   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.354669   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355065   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.355102   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355145   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355405   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.355603   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.355622   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.355637   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355759   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.355884   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.355923   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.356454   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.356634   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.356766   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.474033   63960 ssh_runner.go:195] Run: systemctl --version
	I0829 19:22:08.480891   63960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:22:08.646744   63960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:22:08.652962   63960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:22:08.653033   63960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:22:08.662404   63960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:22:08.662428   63960 start.go:495] detecting cgroup driver to use...
	I0829 19:22:08.662501   63960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:22:08.679704   63960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:22:08.693171   63960 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:22:08.693246   63960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:22:08.707627   63960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:22:08.722664   63960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:22:04.162844   63595 crio.go:462] duration metric: took 1.242688236s to copy over tarball
	I0829 19:22:04.162951   63595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:22:06.319132   63595 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.156145348s)
	I0829 19:22:06.319163   63595 crio.go:469] duration metric: took 2.15628063s to extract the tarball
	I0829 19:22:06.319170   63595 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:22:06.358038   63595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:06.404153   63595 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:06.404174   63595 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:06.404184   63595 kubeadm.go:934] updating node { 192.168.72.204 8443 v1.31.0 crio true true} ...
	I0829 19:22:06.404300   63595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-633326 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-633326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:22:06.404380   63595 ssh_runner.go:195] Run: crio config
	I0829 19:22:06.452166   63595 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.452189   63595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.452206   63595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:06.452234   63595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.204 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-633326 NodeName:auto-633326 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:06.452430   63595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-633326"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:22:06.452501   63595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:06.462366   63595 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:06.462445   63595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:06.471649   63595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0829 19:22:06.489823   63595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:06.506237   63595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
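The three "scp memory --> ..." lines above push generated kubelet and kubeadm configs straight from memory onto the VM over SSH. A simplified sketch of that idea using golang.org/x/crypto/ssh; minikube's ssh_runner handles this internally, and the key path and file contents below are placeholders:

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// writeRemoteFile streams an in-memory byte slice to a path on the VM.
	func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		// A real implementation would also set mode and ownership.
		return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
	}

	func main() {
		key, err := os.ReadFile("/path/to/machines/auto-633326/id_rsa") // placeholder path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "192.168.72.204:22", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		payload := []byte("# placeholder kubeadm.yaml contents\n")
		if err := writeRemoteFile(client, payload, "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
			panic(err)
		}
	}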
	I0829 19:22:06.524914   63595 ssh_runner.go:195] Run: grep 192.168.72.204	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:06.529063   63595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:22:06.542543   63595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:06.665775   63595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:22:06.682477   63595 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326 for IP: 192.168.72.204
	I0829 19:22:06.682502   63595 certs.go:194] generating shared ca certs ...
	I0829 19:22:06.682522   63595 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.682692   63595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:06.682746   63595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:06.682760   63595 certs.go:256] generating profile certs ...
	I0829 19:22:06.682822   63595 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key
	I0829 19:22:06.682841   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt with IP's: []
	I0829 19:22:06.886677   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt ...
	I0829 19:22:06.886705   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: {Name:mk41f64f3a6ddca4ed8bd76984b3aabccc2281b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.886860   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key ...
	I0829 19:22:06.886870   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key: {Name:mke01efa75415e3f69863e323c0bb09f3a6c88b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.886944   63595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6
	I0829 19:22:06.886958   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.204]
	I0829 19:22:06.975367   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 ...
	I0829 19:22:06.975395   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6: {Name:mk0921353250c97cd41cc56849feb45129d92a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.975545   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6 ...
	I0829 19:22:06.975557   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6: {Name:mk7ec497edc365eec664d690e74cb1682a30c355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.975637   63595 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt
	I0829 19:22:06.975720   63595 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key
	I0829 19:22:06.975776   63595 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key
	I0829 19:22:06.975789   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt with IP's: []
	I0829 19:22:07.066728   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt ...
	I0829 19:22:07.066766   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt: {Name:mkd471572d263df053b52e4ac3de60fd35c451b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:07.066961   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key ...
	I0829 19:22:07.066983   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key: {Name:mke117647a722fca5d6b277e25571334a48c88ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
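The certs.go/crypto.go steps above reuse the existing minikubeCA and generate a CA-signed apiserver certificate whose IP SANs are the service VIP, localhost, and the node IP. A self-contained crypto/x509 sketch of the same shape; it is not minikube's code, it creates a throwaway CA instead of reusing one, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; the log instead reuses the already-valid minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// API server leaf certificate with the IP SANs listed in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.204"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}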
	I0829 19:22:07.067156   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:07.067189   63595 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:07.067198   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:07.067219   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:07.067241   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:07.067262   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:07.067297   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:07.067919   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:07.092100   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:07.115710   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:07.139686   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:07.161866   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0829 19:22:07.184987   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:22:07.209673   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:07.232488   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:22:07.256030   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:07.278640   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:07.302359   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:07.327477   63595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:07.346983   63595 ssh_runner.go:195] Run: openssl version
	I0829 19:22:07.352825   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:07.371752   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.381442   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.381513   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.390314   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:22:07.405041   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:07.415262   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.419452   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.419504   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.425343   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:07.436289   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:07.451616   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.456249   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.456318   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.462273   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
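Each of the three certificate loops above copies a PEM into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, for example), which is how TLS libraries locate trusted CAs. A local Go sketch of those two steps; the helper name is invented for illustration:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCert copies a PEM into the shared CA directory, asks openssl for
	// its subject hash, and creates the <hash>.0 symlink in /etc/ssl/certs.
	func installCert(pemPath, shareDir, certsDir string) error {
		dst := filepath.Join(shareDir, filepath.Base(pemPath))
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return err
		}
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // `ln -fs` semantics: replace a stale link if present
		return os.Symlink(dst, link)
	}

	func main() {
		if err := installCert("minikubeCA.pem", "/usr/share/ca-certificates", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}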
	I0829 19:22:07.476300   63595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:07.480866   63595 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:22:07.480924   63595 kubeadm.go:392] StartCluster: {Name:auto-633326 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clu
sterName:auto-633326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.204 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:07.481016   63595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:07.481072   63595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:07.524915   63595 cri.go:89] found id: ""
	I0829 19:22:07.524996   63595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:22:07.535128   63595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:22:07.545781   63595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:22:07.555227   63595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:22:07.555247   63595 kubeadm.go:157] found existing configuration files:
	
	I0829 19:22:07.555296   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:22:07.564299   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:22:07.564371   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:22:07.576585   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:22:07.587693   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:22:07.587752   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:22:07.597305   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:22:07.607228   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:22:07.607276   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:22:07.618149   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:22:07.626961   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:22:07.627021   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
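The four grep/rm pairs above apply one rule to each kubeconfig under /etc/kubernetes: if the file does not reference https://control-plane.minikube.internal:8443 it is treated as stale and removed before kubeadm init runs (here the files are simply absent, so there is nothing to clean). A compact sketch of that check, run locally for illustration:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file, as in the log above: nothing to clean up
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("removing stale config %s\n", f)
				_ = os.Remove(f)
			}
		}
	}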
	I0829 19:22:07.636514   63595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:22:07.690607   63595 kubeadm.go:310] W0829 19:22:07.674107     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:22:07.691595   63595 kubeadm.go:310] W0829 19:22:07.675323     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:22:07.803110   63595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:22:08.881843   63960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:22:09.078449   63960 docker.go:233] disabling docker service ...
	I0829 19:22:09.078533   63960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:22:09.097735   63960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:22:09.114065   63960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:22:09.264241   63960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:22:09.414749   63960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:22:09.430676   63960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:22:09.451678   63960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:22:09.451745   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.462248   63960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:22:09.462329   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.475080   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.486878   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.509052   63960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:22:09.520518   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.533169   63960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.547813   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
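The sed commands above edit only four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and a default_sysctls entry that opens unprivileged ports. A sketch of the equivalent end state written as a fresh drop-in; the [crio.image]/[crio.runtime] section layout follows upstream CRI-O defaults and is an assumption here, since the log only shows in-place edits:

	package main

	import "os"

	func main() {
		// Roughly the state the in-place sed edits above leave behind.
		conf := "[crio.image]\n" +
			"pause_image = \"registry.k8s.io/pause:3.10\"\n\n" +
			"[crio.runtime]\n" +
			"cgroup_manager = \"cgroupfs\"\n" +
			"conmon_cgroup = \"pod\"\n" +
			"default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(conf), 0o644); err != nil {
			panic(err)
		}
	}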
	I0829 19:22:09.560086   63960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:22:09.571848   63960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:22:09.581914   63960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:09.744131   63960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:22:10.620391   63960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:22:10.620469   63960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:22:10.625940   63960 start.go:563] Will wait 60s for crictl version
	I0829 19:22:10.626010   63960 ssh_runner.go:195] Run: which crictl
	I0829 19:22:10.629569   63960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:22:10.676127   63960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:22:10.676218   63960 ssh_runner.go:195] Run: crio --version
	I0829 19:22:10.713956   63960 ssh_runner.go:195] Run: crio --version
	I0829 19:22:10.747555   63960 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:22:08.722026   64307 machine.go:93] provisionDockerMachine start ...
	I0829 19:22:08.722069   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.722439   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.726052   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726492   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.726524   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726729   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.726937   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727124   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727292   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.727554   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.727786   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.727802   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:22:08.843036   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.843068   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843321   64307 buildroot.go:166] provisioning hostname "pause-518621"
	I0829 19:22:08.843350   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.846965   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847413   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.847437   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847621   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.847834   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.847964   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.848145   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.848330   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.848533   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.848548   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-518621 && echo "pause-518621" | sudo tee /etc/hostname
	I0829 19:22:08.976480   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.976511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.979685   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980082   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.980117   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980399   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.980639   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980819   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980959   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.981169   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.981413   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.981469   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-518621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-518621/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-518621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:22:09.083539   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:22:09.083574   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:22:09.083617   64307 buildroot.go:174] setting up certificates
	I0829 19:22:09.083631   64307 provision.go:84] configureAuth start
	I0829 19:22:09.083641   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:09.083917   64307 main.go:141] libmachine: (pause-518621) Calling .GetIP
	I0829 19:22:09.086993   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087524   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.087577   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087752   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.090258   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090527   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.090555   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090675   64307 provision.go:143] copyHostCerts
	I0829 19:22:09.090733   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:22:09.090746   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:22:09.162320   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:22:09.162489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:22:09.162502   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:22:09.162543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:22:09.162620   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:22:09.162629   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:22:09.162660   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:22:09.162723   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.pause-518621 san=[127.0.0.1 192.168.61.203 localhost minikube pause-518621]
	I0829 19:22:09.520291   64307 provision.go:177] copyRemoteCerts
	I0829 19:22:09.520373   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:22:09.520413   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.523620   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.523990   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.524022   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.524271   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.524511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.524733   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.524894   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:09.611312   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:22:09.639602   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:22:09.669692   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0829 19:22:09.705901   64307 provision.go:87] duration metric: took 622.256236ms to configureAuth
	I0829 19:22:09.705938   64307 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:22:09.706215   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:09.706332   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.709310   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709726   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.709758   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709943   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.710159   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710330   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.710714   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:09.710910   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:09.710932   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:22:10.748883   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:22:10.751623   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:10.752057   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:10.752087   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:10.752309   63960 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:22:10.756938   63960 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:22:10.757043   63960 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:10.757102   63960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:10.797885   63960 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:10.797914   63960 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:22:10.797972   63960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:10.833343   63960 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:10.833366   63960 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:10.833375   63960 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.31.0 crio true true} ...
	I0829 19:22:10.833500   63960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-353455 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:22:10.833584   63960 ssh_runner.go:195] Run: crio config
	I0829 19:22:11.082681   63960 cni.go:84] Creating CNI manager for ""
	I0829 19:22:11.082717   63960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:11.082738   63960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:11.082778   63960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-353455 NodeName:kubernetes-upgrade-353455 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:11.082981   63960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-353455"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:22:11.083081   63960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:11.181053   63960 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:11.181145   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:11.227656   63960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0829 19:22:11.352976   63960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:11.486618   63960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:22:11.589619   63960 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:11.609228   63960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:11.948411   63960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:22:11.985258   63960 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455 for IP: 192.168.50.102
	I0829 19:22:11.985287   63960 certs.go:194] generating shared ca certs ...
	I0829 19:22:11.985309   63960 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:11.985534   63960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:11.985616   63960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:11.985633   63960 certs.go:256] generating profile certs ...
	I0829 19:22:11.985768   63960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.key
	I0829 19:22:11.985846   63960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222
	I0829 19:22:11.985899   63960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key
	I0829 19:22:11.986046   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:11.986117   63960 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:11.986131   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:11.986167   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:11.986214   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:11.986243   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:11.986311   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:11.991503   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:12.046976   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:12.162255   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:12.211953   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:12.244188   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 19:22:12.272134   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:22:12.302541   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:12.334884   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:22:12.394205   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:12.445842   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:12.499700   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:12.587552   63960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:12.654436   63960 ssh_runner.go:195] Run: openssl version
	I0829 19:22:12.670724   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:12.685960   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.690505   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.690567   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.698450   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:12.710185   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:12.723972   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.730197   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.730259   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.737837   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:22:12.748838   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:12.761782   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.766511   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.766573   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.772918   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:22:12.784575   63960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:12.788891   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:22:12.794996   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:22:12.800752   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:22:12.806803   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:22:12.812096   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:22:12.817499   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
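	Each openssl run above passes -checkend 86400, so the command exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; that is how minikube decides the existing control-plane certs can be reused. A minimal hand-run equivalent against one of the same files is sketched below (the echo messages are illustrative, not from the log):
	
	$ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "certificate valid for at least another 24h" \
	    || echo "certificate expires within 24h"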
	I0829 19:22:12.823087   63960 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:12.823187   63960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:12.823257   63960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:12.896542   63960 cri.go:89] found id: "c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2"
	I0829 19:22:12.896570   63960 cri.go:89] found id: "c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6"
	I0829 19:22:12.896577   63960 cri.go:89] found id: "1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98"
	I0829 19:22:12.896582   63960 cri.go:89] found id: "d390a833f84a3199f7a1e4020b262916b76f50a210ff2ee2a9ab18fd2786fc5d"
	I0829 19:22:12.896604   63960 cri.go:89] found id: "212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2"
	I0829 19:22:12.896609   63960 cri.go:89] found id: "cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0"
	I0829 19:22:12.896615   63960 cri.go:89] found id: "008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd"
	I0829 19:22:12.896619   63960 cri.go:89] found id: "0cd01fd8b57cf8f4e4b611390b809d76c0d79dfe675a582f411a5b6853b0ac5c"
	I0829 19:22:12.896623   63960 cri.go:89] found id: "b089c64d036f0349d5af067696bc01f28fb421669b56528167c94d2f0fc02808"
	I0829 19:22:12.896632   63960 cri.go:89] found id: "36b3fb146d05a158f24dab08aa4d54f194eeeaa0402b864428388d48c52e1073"
	I0829 19:22:12.896640   63960 cri.go:89] found id: "285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7"
	I0829 19:22:12.896644   63960 cri.go:89] found id: "f537dc7a1b4a4b62a16b3dad35ee2633093b730e986c2461d312b3c7cc39dc90"
	I0829 19:22:12.896651   63960 cri.go:89] found id: "e05767faee629c2756c35722878496839934351dda4ee2bd3838c2986c7fcf3e"
	I0829 19:22:12.896655   63960 cri.go:89] found id: "9b563857dc4d3fa049193ff55c4f1810290a0b471d1a76434f996f7cbbf2df86"
	I0829 19:22:12.896663   63960 cri.go:89] found id: "214cbd72e3eb481ba9580536acabdc6d3bb6bf3a248a6cac0ad64a5149a1b4eb"
	I0829 19:22:12.896670   63960 cri.go:89] found id: ""
	I0829 19:22:12.896756   63960 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.448005457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d00abbe-4a72-4d9f-b589-2434d31e7a47 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.449361441Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=23c5687c-c8a9-401f-a9a1-8d9472c49fff name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.449760436Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-rvxpb,Uid:670ff94f-8820-40e2-b7e1-2b4180f6ff93,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338651966282,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.589029396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&PodSandboxMetadata{Name:kube-proxy-6xmsm,Uid:b54d05be-c00f-4fc4-b25f-126fc5e21687,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1724959338478082537,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.227265676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-518621,Uid:845adfaa897323c582d0ae3d1493297e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338412150190,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 845adfaa897323c582d0ae3d1493297e,kubernetes.io/config.seen: 2024-08-29T19:21:50.582440042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&PodSandboxMetadata{Name:etcd-pause-518621,Uid:3e4d9b9c749be9cbff73417887c5ae5d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338404211631,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.203:2379,kubernetes.io/config.hash: 3e4d9b9c749be9cbff73417887c5ae5d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582435609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a2063a99ed5a573ee287d3cf52dad376
b7700a4fbcf7b94bdb89c226f614e38,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-518621,Uid:93b2de5558c6115b2b50b8e9c44c789d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338402163070,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 93b2de5558c6115b2b50b8e9c44c789d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582440835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-518621,Uid:0c02e517bf1d23dc9b63ad994dac8382,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338263741736,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.203:8443,kubernetes.io/config.hash: 0c02e517bf1d23dc9b63ad994dac8382,kubernetes.io/config.seen: 2024-08-29T19:21:50.582438999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-rvxpb,Uid:670ff94f-8820-40e2-b7e1-2b4180f6ff93,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959336026456853,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-08-29T19:21:55.589029396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&PodSandboxMetadata{Name:etcd-pause-518621,Uid:3e4d9b9c749be9cbff73417887c5ae5d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335925247699,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.203:2379,kubernetes.io/config.hash: 3e4d9b9c749be9cbff73417887c5ae5d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582435609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-518621,Uid:93b2de55
58c6115b2b50b8e9c44c789d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335920408658,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 93b2de5558c6115b2b50b8e9c44c789d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582440835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-518621,Uid:0c02e517bf1d23dc9b63ad994dac8382,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335886118675,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.203:8443,kubernetes.io/config.hash: 0c02e517bf1d23dc9b63ad994dac8382,kubernetes.io/config.seen: 2024-08-29T19:21:50.582438999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-518621,Uid:845adfaa897323c582d0ae3d1493297e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335881177036,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 845adfaa897323c582d0ae3d1493297e,kubernetes.io/config.seen: 202
4-08-29T19:21:50.582440042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-6xmsm,Uid:b54d05be-c00f-4fc4-b25f-126fc5e21687,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335868710166,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.227265676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb7ea02a8ea469ae470e82fb9f701e78fb761560dc45ba5216ffebf2afdc6af,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-x4hfc,Uid:a29eeba0-da21-4ed5-9a1f-c3dec86499b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:17249593
15933571496,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-x4hfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29eeba0-da21-4ed5-9a1f-c3dec86499b9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.615074053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=23c5687c-c8a9-401f-a9a1-8d9472c49fff name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.450450916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3435e82f-d38d-434d-87ce-6185c9afc8ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.450527849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3435e82f-d38d-434d-87ce-6185c9afc8ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.450909132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68ee5301-dd7f-485d-bbff-5854424bf02a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.450900066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3435e82f-d38d-434d-87ce-6185c9afc8ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.451354644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959363451323771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68ee5301-dd7f-485d-bbff-5854424bf02a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.451871679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a412304f-0ae6-4a34-bb5d-027182aad555 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.451972809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a412304f-0ae6-4a34-bb5d-027182aad555 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.452200603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a412304f-0ae6-4a34-bb5d-027182aad555 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.495698204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ad97673-6bda-497b-8f4d-1189e8123a6d name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.495775541Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ad97673-6bda-497b-8f4d-1189e8123a6d name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.496993231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1672e1dc-5f81-49f1-8d7d-0243df2ba682 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.497538279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959363497512048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1672e1dc-5f81-49f1-8d7d-0243df2ba682 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.498135041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf3537e1-8364-4389-93f9-a5111348c4c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.498208583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf3537e1-8364-4389-93f9-a5111348c4c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.498493171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf3537e1-8364-4389-93f9-a5111348c4c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.540192858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=531c72c2-2f96-4c07-a024-d4439e2ee7a5 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.540273402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=531c72c2-2f96-4c07-a024-d4439e2ee7a5 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.541728506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9381f1e-59e3-4d96-a92b-2fa3a0f4a056 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.542071144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959363542049886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9381f1e-59e3-4d96-a92b-2fa3a0f4a056 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.542547338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=010a199b-6777-4f89-b696-4c73b013e3d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.542599596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=010a199b-6777-4f89-b696-4c73b013e3d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:43 pause-518621 crio[2828]: time="2024-08-29 19:22:43.542952478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=010a199b-6777-4f89-b696-4c73b013e3d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56ca4a7259d27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   540ac31f915eb       coredns-6f6b679f8f-rvxpb
	be6d8aee29bb3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   18 seconds ago      Running             kube-proxy                2                   203172b7233c7       kube-proxy-6xmsm
	77d14fe423e9e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   21 seconds ago      Running             kube-scheduler            2                   2a2063a99ed5a       kube-scheduler-pause-518621
	4a67fe21c0436       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 seconds ago      Running             kube-apiserver            2                   eb294925da7af       kube-apiserver-pause-518621
	98be6be7b34e6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   21 seconds ago      Running             kube-controller-manager   2                   5bd0a718df47e       kube-controller-manager-pause-518621
	c9cfee9c3aaa2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago      Running             etcd                      2                   c0f4cb9e5d491       etcd-pause-518621
	a306ba179d404       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   5a42daab787dc       coredns-6f6b679f8f-rvxpb
	bdf90b489bfec       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   27 seconds ago      Exited              kube-controller-manager   1                   ae8c40adc0854       kube-controller-manager-pause-518621
	cf17176a7ba5d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   27 seconds ago      Exited              kube-scheduler            1                   bf74783da3e50       kube-scheduler-pause-518621
	03cf670122c35       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   27 seconds ago      Exited              kube-proxy                1                   5acf290f0c22b       kube-proxy-6xmsm
	fb392b4b0fb0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   27 seconds ago      Exited              etcd                      1                   412f1d0875b3d       etcd-pause-518621
	791dada70b1ab       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   27 seconds ago      Exited              kube-apiserver            1                   c8322732f9b18       kube-apiserver-pause-518621
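	A listing like the one above can be reproduced directly against the node's CRI-O socket; a minimal sketch, assuming the pause-518621 profile name used throughout these logs:
	
	  $ minikube ssh -p pause-518621 -- sudo crictl ps -a
	  # lists every container CRI-O knows about (running and exited), with the same
	  # CONTAINER / IMAGE / CREATED / STATE / NAME / ATTEMPT / POD ID / POD columns shown above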
	
	
	==> coredns [56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48846 - 36555 "HINFO IN 8690049870560730438.6411881968822899713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011062256s
	
	
	==> coredns [a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4] <==
	
	
	==> describe nodes <==
	Name:               pause-518621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-518621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=pause-518621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_21_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-518621
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:22:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.203
	  Hostname:    pause-518621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ac65647b7ce4003907f5a59eb0e8c1c
	  System UUID:                3ac65647-b7ce-4003-907f-5a59eb0e8c1c
	  Boot ID:                    c3052a04-7f28-42d8-9c14-06eda5fd4094
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-rvxpb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     48s
	  kube-system                 etcd-pause-518621                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         53s
	  kube-system                 kube-apiserver-pause-518621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-pause-518621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-6xmsm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-pause-518621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     53s                kubelet          Node pause-518621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  53s                kubelet          Node pause-518621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s                kubelet          Node pause-518621 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeReady                52s                kubelet          Node pause-518621 status is now: NodeReady
	  Normal  RegisteredNode           49s                node-controller  Node pause-518621 event: Registered Node pause-518621 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-518621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-518621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-518621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-518621 event: Registered Node pause-518621 in Controller
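	At the time of capture the node reports Ready with MemoryPressure, DiskPressure and PIDPressure all False, and the repeated "Starting kubelet." / "RegisteredNode" events line up with the attempt-1 versus attempt-2 containers listed earlier. The same view can be pulled live from the cluster; a minimal sketch, assuming the kubectl context is named after the pause-518621 profile as in the other command transcripts in this report:
	
	  $ kubectl --context pause-518621 describe node pause-518621
	  # prints the labels, annotations, conditions, capacity/allocatable figures and
	  # recent events reproduced in the block above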
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.786739] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056321] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058507] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.162537] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.150876] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.287587] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.053036] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.008855] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.067204] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.510735] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.079653] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.170552] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.217064] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[Aug29 19:22] kauditd_printk_skb: 98 callbacks suppressed
	[ +11.702933] systemd-fstab-generator[2398]: Ignoring "noauto" option for root device
	[  +0.335457] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.333918] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.288285] systemd-fstab-generator[2720]: Ignoring "noauto" option for root device
	[  +0.458354] systemd-fstab-generator[2818]: Ignoring "noauto" option for root device
	[  +1.526222] systemd-fstab-generator[3390]: Ignoring "noauto" option for root device
	[  +1.829605] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +0.085327] kauditd_printk_skb: 244 callbacks suppressed
	[  +7.653546] kauditd_printk_skb: 50 callbacks suppressed
	[ +10.477321] systemd-fstab-generator[3949]: Ignoring "noauto" option for root device
	
	
	==> etcd [c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4] <==
	{"level":"info","ts":"2024-08-29T19:22:22.014415Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2024-08-29T19:22:22.014537Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:22.014582Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:22.020133Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:22.027325Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:22:22.028763Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:22:22.030689Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:22:22.030777Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-08-29T19:22:22.030804Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-08-29T19:22:23.679681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:23.679746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:23.679893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:23.679913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.679920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.679948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.679957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.686257Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-518621 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:22:23.686259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:23.686490Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:23.686950Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:23.687022Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:23.687599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:23.687980Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:23.688456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	{"level":"info","ts":"2024-08-29T19:22:23.689317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50] <==
	{"level":"info","ts":"2024-08-29T19:22:16.974039Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-29T19:22:17.015500Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","commit-index":407}
	{"level":"info","ts":"2024-08-29T19:22:17.015823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-29T19:22:17.016080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became follower at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:17.016179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3dce464254b32e20 [peers: [], term: 2, commit: 407, applied: 0, lastindex: 407, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-29T19:22:17.022228Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-29T19:22:17.028434Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":393}
	{"level":"info","ts":"2024-08-29T19:22:17.031743Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-29T19:22:17.038477Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3dce464254b32e20","timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:22:17.041511Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3dce464254b32e20"}
	{"level":"info","ts":"2024-08-29T19:22:17.041599Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"3dce464254b32e20","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-29T19:22:17.041896Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-29T19:22:17.043315Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:17.056125Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:17.056345Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:17.056421Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:17.056788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=(4453574332218813984)"}
	{"level":"info","ts":"2024-08-29T19:22:17.058353Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2024-08-29T19:22:17.061023Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:17.061124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:17.066286Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:22:17.066593Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:22:17.068526Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:22:17.066418Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-08-29T19:22:17.072883Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.203:2380"}
	
	
	==> kernel <==
	 19:22:43 up 1 min,  0 users,  load average: 2.29, 0.78, 0.27
	Linux pause-518621 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f] <==
	I0829 19:22:25.072737       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:22:25.073146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:22:25.077721       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:22:25.077780       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:22:25.078039       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:22:25.078155       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:22:25.086697       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:22:25.089050       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:22:25.095141       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:22:25.095601       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:22:25.095693       1 policy_source.go:224] refreshing policies
	I0829 19:22:25.095757       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:22:25.095785       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:22:25.095807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:22:25.095828       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:22:25.097781       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0829 19:22:25.108843       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0829 19:22:25.975853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:22:26.367436       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:22:26.412589       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:22:26.454382       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:22:26.484185       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:22:26.491529       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:22:28.525405       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:22:28.678545       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6] <==
	I0829 19:22:17.222325       1 options.go:228] external host was not specified, using 192.168.61.203
	I0829 19:22:17.248563       1 server.go:142] Version: v1.31.0
	I0829 19:22:17.248724       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c] <==
	I0829 19:22:28.373143       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0829 19:22:28.373188       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0829 19:22:28.373238       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0829 19:22:28.373304       1 shared_informer.go:320] Caches are synced for PV protection
	I0829 19:22:28.373352       1 shared_informer.go:320] Caches are synced for GC
	I0829 19:22:28.373408       1 shared_informer.go:320] Caches are synced for ephemeral
	I0829 19:22:28.378418       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0829 19:22:28.384493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0829 19:22:28.388222       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0829 19:22:28.421470       1 shared_informer.go:320] Caches are synced for daemon sets
	I0829 19:22:28.427949       1 shared_informer.go:320] Caches are synced for stateful set
	I0829 19:22:28.518463       1 shared_informer.go:320] Caches are synced for service account
	I0829 19:22:28.527962       1 shared_informer.go:320] Caches are synced for namespace
	I0829 19:22:28.528337       1 shared_informer.go:320] Caches are synced for HPA
	I0829 19:22:28.531881       1 shared_informer.go:320] Caches are synced for disruption
	I0829 19:22:28.550782       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0829 19:22:28.575459       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:28.579419       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:28.588446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="264.793771ms"
	I0829 19:22:28.588807       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="77.754µs"
	I0829 19:22:29.008752       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:29.022479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:29.022591       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 19:22:34.468323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.280917ms"
	I0829 19:22:34.468722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.303µs"
	
	
	==> kube-controller-manager [bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97] <==
	
	
	==> kube-proxy [03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c] <==
	
	
	==> kube-proxy [be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:22:25.699377       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:22:25.706422       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.203"]
	E0829 19:22:25.706502       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:22:25.738218       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:22:25.738269       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:22:25.738292       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:22:25.740570       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:22:25.740912       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:22:25.740934       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:25.742051       1 config.go:197] "Starting service config controller"
	I0829 19:22:25.742095       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:22:25.742116       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:22:25.742120       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:22:25.742598       1 config.go:326] "Starting node config controller"
	I0829 19:22:25.742673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:22:25.842832       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:22:25.842912       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:22:25.842857       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d] <==
	I0829 19:22:22.500825       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:22:25.015960       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:22:25.016167       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:22:25.016200       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:22:25.016270       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:22:25.101403       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:22:25.104863       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:25.108601       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:22:25.108808       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:22:25.108869       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:22:25.113456       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:22:25.215683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee] <==
	
	
	==> kubelet <==
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.414216    3515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/845adfaa897323c582d0ae3d1493297e-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-518621\" (UID: \"845adfaa897323c582d0ae3d1493297e\") " pod="kube-system/kube-controller-manager-pause-518621"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: E0829 19:22:21.414846    3515 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-518621?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="400ms"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.609435    3515 kubelet_node_status.go:72] "Attempting to register node" node="pause-518621"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: E0829 19:22:21.610430    3515 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-518621"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.635814    3515 scope.go:117] "RemoveContainer" containerID="fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.637726    3515 scope.go:117] "RemoveContainer" containerID="791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.639963    3515 scope.go:117] "RemoveContainer" containerID="bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.640292    3515 scope.go:117] "RemoveContainer" containerID="cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: E0829 19:22:21.817145    3515 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-518621?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="800ms"
	Aug 29 19:22:22 pause-518621 kubelet[3515]: I0829 19:22:22.011517    3515 kubelet_node_status.go:72] "Attempting to register node" node="pause-518621"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.167542    3515 apiserver.go:52] "Watching apiserver"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.212874    3515 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.213848    3515 kubelet_node_status.go:111] "Node was previously registered" node="pause-518621"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.213963    3515 kubelet_node_status.go:75] "Successfully registered node" node="pause-518621"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.214016    3515 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.216037    3515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.296928    3515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b54d05be-c00f-4fc4-b25f-126fc5e21687-xtables-lock\") pod \"kube-proxy-6xmsm\" (UID: \"b54d05be-c00f-4fc4-b25f-126fc5e21687\") " pod="kube-system/kube-proxy-6xmsm"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.296973    3515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b54d05be-c00f-4fc4-b25f-126fc5e21687-lib-modules\") pod \"kube-proxy-6xmsm\" (UID: \"b54d05be-c00f-4fc4-b25f-126fc5e21687\") " pod="kube-system/kube-proxy-6xmsm"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.473266    3515 scope.go:117] "RemoveContainer" containerID="03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.473520    3515 scope.go:117] "RemoveContainer" containerID="a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4"
	Aug 29 19:22:31 pause-518621 kubelet[3515]: E0829 19:22:31.309801    3515 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959351309460939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:22:31 pause-518621 kubelet[3515]: E0829 19:22:31.309847    3515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959351309460939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:22:34 pause-518621 kubelet[3515]: I0829 19:22:34.438766    3515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 29 19:22:41 pause-518621 kubelet[3515]: E0829 19:22:41.311579    3515 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959361310811394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:22:41 pause-518621 kubelet[3515]: E0829 19:22:41.312366    3515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959361310811394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:22:43.099206   64566 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-518621 -n pause-518621
helpers_test.go:261: (dbg) Run:  kubectl --context pause-518621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-518621 -n pause-518621
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-518621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-518621 logs -n 25: (1.34721144s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-633326 sudo cat                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo cat                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo cat                  | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo                      | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo find                 | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-633326 sudo crio                 | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-633326                           | cilium-633326             | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC | 29 Aug 24 19:19 UTC |
	| start   | -p cert-options-034564                     | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:19 UTC | 29 Aug 24 19:21 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	| start   | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:21 UTC |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-523972 ssh cat          | force-systemd-flag-523972 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf         |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-523972               | force-systemd-flag-523972 | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:20 UTC |
	| start   | -p pause-518621 --memory=2048              | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:20 UTC | 29 Aug 24 19:22 UTC |
	|         | --install-addons=false                     |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | cert-options-034564 ssh                    | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	|         | openssl x509 -text -noout -in              |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt      |                           |         |         |                     |                     |
	| ssh     | -p cert-options-034564 -- sudo             | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	|         | cat /etc/kubernetes/admin.conf             |                           |         |         |                     |                     |
	| delete  | -p cert-options-034564                     | cert-options-034564       | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC | 29 Aug 24 19:21 UTC |
	| start   | -p auto-633326 --memory=3072               | auto-633326               | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-353455               | kubernetes-upgrade-353455 | jenkins | v1.33.1 | 29 Aug 24 19:21 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p pause-518621                            | pause-518621              | jenkins | v1.33.1 | 29 Aug 24 19:22 UTC | 29 Aug 24 19:22 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:22:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:22:06.543047   64307 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:22:06.543136   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543143   64307 out.go:358] Setting ErrFile to fd 2...
	I0829 19:22:06.543147   64307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:22:06.543333   64307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:22:06.543919   64307 out.go:352] Setting JSON to false
	I0829 19:22:06.544855   64307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7474,"bootTime":1724951853,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:22:06.544918   64307 start.go:139] virtualization: kvm guest
	I0829 19:22:06.547020   64307 out.go:177] * [pause-518621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:22:06.548267   64307 notify.go:220] Checking for updates...
	I0829 19:22:06.548290   64307 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:22:06.549457   64307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:22:06.550545   64307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:22:06.551572   64307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:22:06.552629   64307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:22:06.553879   64307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:22:06.555449   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:06.556008   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.556072   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.571521   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0829 19:22:06.572004   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.572569   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.572593   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.572979   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.573186   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.573422   64307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:22:06.573774   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:06.573811   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:06.588552   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43773
	I0829 19:22:06.589111   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:06.589660   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:06.589684   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:06.590034   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:06.590283   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:06.626910   64307 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:22:06.628114   64307 start.go:297] selected driver: kvm2
	I0829 19:22:06.628134   64307 start.go:901] validating driver "kvm2" against &{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.628330   64307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:22:06.628800   64307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.628902   64307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:22:06.644356   64307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:22:06.645155   64307 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.645172   64307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.645238   64307 start.go:340] cluster config:
	{Name:pause-518621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-518621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:06.645392   64307 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:22:06.647209   64307 out.go:177] * Starting "pause-518621" primary control-plane node in "pause-518621" cluster
	I0829 19:22:06.648577   64307 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:06.648622   64307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:22:06.648630   64307 cache.go:56] Caching tarball of preloaded images
	I0829 19:22:06.648726   64307 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:22:06.648739   64307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:22:06.648910   64307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/pause-518621/config.json ...
	I0829 19:22:06.649147   64307 start.go:360] acquireMachinesLock for pause-518621: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:22:08.347255   64307 start.go:364] duration metric: took 1.698077985s to acquireMachinesLock for "pause-518621"
	I0829 19:22:08.347323   64307 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:22:08.347332   64307 fix.go:54] fixHost starting: 
	I0829 19:22:08.347776   64307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:22:08.347825   64307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:22:08.368493   64307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0829 19:22:08.368962   64307 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:22:08.369484   64307 main.go:141] libmachine: Using API Version  1
	I0829 19:22:08.369509   64307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:22:08.369874   64307 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:22:08.370063   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.370236   64307 main.go:141] libmachine: (pause-518621) Calling .GetState
	I0829 19:22:08.371946   64307 fix.go:112] recreateIfNeeded on pause-518621: state=Running err=<nil>
	W0829 19:22:08.371976   64307 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:22:08.550441   64307 out.go:177] * Updating the running kvm2 "pause-518621" VM ...
	I0829 19:22:08.114559   63960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:22:08.114586   63960 machine.go:96] duration metric: took 6.693314723s to provisionDockerMachine
	I0829 19:22:08.114598   63960 start.go:293] postStartSetup for "kubernetes-upgrade-353455" (driver="kvm2")
	I0829 19:22:08.114607   63960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:22:08.114626   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.115022   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:22:08.115049   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.118095   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.118498   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.118529   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.118720   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.118905   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.119118   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.119320   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.200131   63960 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:22:08.203930   63960 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:22:08.203953   63960 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:22:08.204015   63960 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:22:08.204112   63960 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:22:08.204234   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:22:08.213344   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:08.239778   63960 start.go:296] duration metric: took 125.16719ms for postStartSetup
	I0829 19:22:08.239819   63960 fix.go:56] duration metric: took 6.844921079s for fixHost
	I0829 19:22:08.239848   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.243125   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.243470   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.243500   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.243637   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.243812   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.244002   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.244175   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.244350   63960 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.244514   63960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0829 19:22:08.244530   63960 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:22:08.347128   63960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959328.335613934
	
	I0829 19:22:08.347147   63960 fix.go:216] guest clock: 1724959328.335613934
	I0829 19:22:08.347154   63960 fix.go:229] Guest: 2024-08-29 19:22:08.335613934 +0000 UTC Remote: 2024-08-29 19:22:08.239823526 +0000 UTC m=+34.502528738 (delta=95.790408ms)
	I0829 19:22:08.347171   63960 fix.go:200] guest clock delta is within tolerance: 95.790408ms
	I0829 19:22:08.347176   63960 start.go:83] releasing machines lock for "kubernetes-upgrade-353455", held for 6.952310233s
	I0829 19:22:08.347198   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.347465   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:22:08.350559   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.350972   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.351001   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.351129   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351658   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351847   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .DriverName
	I0829 19:22:08.351951   63960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:22:08.352005   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.352074   63960 ssh_runner.go:195] Run: cat /version.json
	I0829 19:22:08.352094   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHHostname
	I0829 19:22:08.354669   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355065   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.355102   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355145   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355405   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.355603   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:08.355622   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.355637   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:08.355759   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.355884   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHPort
	I0829 19:22:08.355923   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.356454   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHKeyPath
	I0829 19:22:08.356634   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetSSHUsername
	I0829 19:22:08.356766   63960 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/kubernetes-upgrade-353455/id_rsa Username:docker}
	I0829 19:22:08.474033   63960 ssh_runner.go:195] Run: systemctl --version
	I0829 19:22:08.480891   63960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:22:08.646744   63960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:22:08.652962   63960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:22:08.653033   63960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:22:08.662404   63960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 19:22:08.662428   63960 start.go:495] detecting cgroup driver to use...
	I0829 19:22:08.662501   63960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:22:08.679704   63960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:22:08.693171   63960 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:22:08.693246   63960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:22:08.707627   63960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:22:08.722664   63960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:22:04.162844   63595 crio.go:462] duration metric: took 1.242688236s to copy over tarball
	I0829 19:22:04.162951   63595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:22:06.319132   63595 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.156145348s)
	I0829 19:22:06.319163   63595 crio.go:469] duration metric: took 2.15628063s to extract the tarball
	I0829 19:22:06.319170   63595 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:22:06.358038   63595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:06.404153   63595 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:06.404174   63595 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:06.404184   63595 kubeadm.go:934] updating node { 192.168.72.204 8443 v1.31.0 crio true true} ...
	I0829 19:22:06.404300   63595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-633326 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-633326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:22:06.404380   63595 ssh_runner.go:195] Run: crio config
	I0829 19:22:06.452166   63595 cni.go:84] Creating CNI manager for ""
	I0829 19:22:06.452189   63595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:06.452206   63595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:06.452234   63595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.204 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-633326 NodeName:auto-633326 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:06.452430   63595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-633326"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:22:06.452501   63595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:06.462366   63595 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:06.462445   63595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:06.471649   63595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0829 19:22:06.489823   63595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:06.506237   63595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0829 19:22:06.524914   63595 ssh_runner.go:195] Run: grep 192.168.72.204	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:06.529063   63595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:22:06.542543   63595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:06.665775   63595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:22:06.682477   63595 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326 for IP: 192.168.72.204
	I0829 19:22:06.682502   63595 certs.go:194] generating shared ca certs ...
	I0829 19:22:06.682522   63595 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.682692   63595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:06.682746   63595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:06.682760   63595 certs.go:256] generating profile certs ...
	I0829 19:22:06.682822   63595 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key
	I0829 19:22:06.682841   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt with IP's: []
	I0829 19:22:06.886677   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt ...
	I0829 19:22:06.886705   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: {Name:mk41f64f3a6ddca4ed8bd76984b3aabccc2281b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.886860   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key ...
	I0829 19:22:06.886870   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.key: {Name:mke01efa75415e3f69863e323c0bb09f3a6c88b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.886944   63595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6
	I0829 19:22:06.886958   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.204]
	I0829 19:22:06.975367   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 ...
	I0829 19:22:06.975395   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6: {Name:mk0921353250c97cd41cc56849feb45129d92a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.975545   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6 ...
	I0829 19:22:06.975557   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6: {Name:mk7ec497edc365eec664d690e74cb1682a30c355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:06.975637   63595 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt.b46938b6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt
	I0829 19:22:06.975720   63595 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key.b46938b6 -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key
	I0829 19:22:06.975776   63595 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key
	I0829 19:22:06.975789   63595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt with IP's: []
	I0829 19:22:07.066728   63595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt ...
	I0829 19:22:07.066766   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt: {Name:mkd471572d263df053b52e4ac3de60fd35c451b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:07.066961   63595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key ...
	I0829 19:22:07.066983   63595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key: {Name:mke117647a722fca5d6b277e25571334a48c88ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:07.067156   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:07.067189   63595 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:07.067198   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:07.067219   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:07.067241   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:07.067262   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:07.067297   63595 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:07.067919   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:07.092100   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:07.115710   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:07.139686   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:07.161866   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0829 19:22:07.184987   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:22:07.209673   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:07.232488   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:22:07.256030   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:07.278640   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:07.302359   63595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:07.327477   63595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:07.346983   63595 ssh_runner.go:195] Run: openssl version
	I0829 19:22:07.352825   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:07.371752   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.381442   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.381513   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:07.390314   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:22:07.405041   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:07.415262   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.419452   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.419504   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:07.425343   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:07.436289   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:07.451616   63595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.456249   63595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.456318   63595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:07.462273   63595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:22:07.476300   63595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:07.480866   63595 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 19:22:07.480924   63595 kubeadm.go:392] StartCluster: {Name:auto-633326 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-633326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.204 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:07.481016   63595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:07.481072   63595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:07.524915   63595 cri.go:89] found id: ""
	I0829 19:22:07.524996   63595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:22:07.535128   63595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:22:07.545781   63595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:22:07.555227   63595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:22:07.555247   63595 kubeadm.go:157] found existing configuration files:
	
	I0829 19:22:07.555296   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:22:07.564299   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:22:07.564371   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:22:07.576585   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:22:07.587693   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:22:07.587752   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:22:07.597305   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:22:07.607228   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:22:07.607276   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:22:07.618149   63595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:22:07.626961   63595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:22:07.627021   63595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:22:07.636514   63595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:22:07.690607   63595 kubeadm.go:310] W0829 19:22:07.674107     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:22:07.691595   63595 kubeadm.go:310] W0829 19:22:07.675323     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:22:07.803110   63595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:22:08.881843   63960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:22:09.078449   63960 docker.go:233] disabling docker service ...
	I0829 19:22:09.078533   63960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:22:09.097735   63960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:22:09.114065   63960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:22:09.264241   63960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:22:09.414749   63960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:22:09.430676   63960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:22:09.451678   63960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:22:09.451745   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.462248   63960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:22:09.462329   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.475080   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.486878   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.509052   63960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:22:09.520518   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.533169   63960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.547813   63960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:22:09.560086   63960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:22:09.571848   63960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:22:09.581914   63960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:09.744131   63960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:22:10.620391   63960 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:22:10.620469   63960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:22:10.625940   63960 start.go:563] Will wait 60s for crictl version
	I0829 19:22:10.626010   63960 ssh_runner.go:195] Run: which crictl
	I0829 19:22:10.629569   63960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:22:10.676127   63960 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:22:10.676218   63960 ssh_runner.go:195] Run: crio --version
	I0829 19:22:10.713956   63960 ssh_runner.go:195] Run: crio --version
	I0829 19:22:10.747555   63960 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:22:08.722026   64307 machine.go:93] provisionDockerMachine start ...
	I0829 19:22:08.722069   64307 main.go:141] libmachine: (pause-518621) Calling .DriverName
	I0829 19:22:08.722439   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.726052   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726492   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.726524   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.726729   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.726937   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727124   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.727292   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.727554   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.727786   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.727802   64307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:22:08.843036   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.843068   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843321   64307 buildroot.go:166] provisioning hostname "pause-518621"
	I0829 19:22:08.843350   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:08.843539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.846965   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847413   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.847437   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.847621   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.847834   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.847964   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.848145   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.848330   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.848533   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.848548   64307 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-518621 && echo "pause-518621" | sudo tee /etc/hostname
	I0829 19:22:08.976480   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-518621
	
	I0829 19:22:08.976511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:08.979685   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980082   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:08.980117   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:08.980399   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:08.980639   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980819   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:08.980959   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:08.981169   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:08.981413   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:08.981469   64307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-518621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-518621/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-518621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:22:09.083539   64307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:22:09.083574   64307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:22:09.083617   64307 buildroot.go:174] setting up certificates
	I0829 19:22:09.083631   64307 provision.go:84] configureAuth start
	I0829 19:22:09.083641   64307 main.go:141] libmachine: (pause-518621) Calling .GetMachineName
	I0829 19:22:09.083917   64307 main.go:141] libmachine: (pause-518621) Calling .GetIP
	I0829 19:22:09.086993   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087524   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.087577   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.087752   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.090258   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090527   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.090555   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.090675   64307 provision.go:143] copyHostCerts
	I0829 19:22:09.090733   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:22:09.090746   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:22:09.162320   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:22:09.162489   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:22:09.162502   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:22:09.162543   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:22:09.162620   64307 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:22:09.162629   64307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:22:09.162660   64307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:22:09.162723   64307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.pause-518621 san=[127.0.0.1 192.168.61.203 localhost minikube pause-518621]
	I0829 19:22:09.520291   64307 provision.go:177] copyRemoteCerts
	I0829 19:22:09.520373   64307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:22:09.520413   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.523620   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.523990   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.524022   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.524271   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.524511   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.524733   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.524894   64307 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/pause-518621/id_rsa Username:docker}
	I0829 19:22:09.611312   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:22:09.639602   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:22:09.669692   64307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0829 19:22:09.705901   64307 provision.go:87] duration metric: took 622.256236ms to configureAuth
	I0829 19:22:09.705938   64307 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:22:09.706215   64307 config.go:182] Loaded profile config "pause-518621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:22:09.706332   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHHostname
	I0829 19:22:09.709310   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709726   64307 main.go:141] libmachine: (pause-518621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e4:49", ip: ""} in network mk-pause-518621: {Iface:virbr3 ExpiryTime:2024-08-29 20:21:25 +0000 UTC Type:0 Mac:52:54:00:3e:e4:49 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-518621 Clientid:01:52:54:00:3e:e4:49}
	I0829 19:22:09.709758   64307 main.go:141] libmachine: (pause-518621) DBG | domain pause-518621 has defined IP address 192.168.61.203 and MAC address 52:54:00:3e:e4:49 in network mk-pause-518621
	I0829 19:22:09.709943   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHPort
	I0829 19:22:09.710159   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710330   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHKeyPath
	I0829 19:22:09.710539   64307 main.go:141] libmachine: (pause-518621) Calling .GetSSHUsername
	I0829 19:22:09.710714   64307 main.go:141] libmachine: Using SSH client type: native
	I0829 19:22:09.710910   64307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0829 19:22:09.710932   64307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:22:10.748883   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) Calling .GetIP
	I0829 19:22:10.751623   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:10.752057   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:18:17", ip: ""} in network mk-kubernetes-upgrade-353455: {Iface:virbr2 ExpiryTime:2024-08-29 20:21:01 +0000 UTC Type:0 Mac:52:54:00:13:18:17 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:kubernetes-upgrade-353455 Clientid:01:52:54:00:13:18:17}
	I0829 19:22:10.752087   63960 main.go:141] libmachine: (kubernetes-upgrade-353455) DBG | domain kubernetes-upgrade-353455 has defined IP address 192.168.50.102 and MAC address 52:54:00:13:18:17 in network mk-kubernetes-upgrade-353455
	I0829 19:22:10.752309   63960 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:22:10.756938   63960 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:22:10.757043   63960 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:22:10.757102   63960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:10.797885   63960 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:10.797914   63960 crio.go:433] Images already preloaded, skipping extraction
	I0829 19:22:10.797972   63960 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:22:10.833343   63960 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:22:10.833366   63960 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:22:10.833375   63960 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.31.0 crio true true} ...
	I0829 19:22:10.833500   63960 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-353455 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
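Note: the kubelet drop-in dumped above (kubeadm.go:946) pins the kubelet binary to the selected Kubernetes version and passes the node name and IP on the ExecStart line. A small Go sketch that renders that line from the same three inputs — a hypothetical helper, shown only to make the flag templating explicit:

    package main

    import "fmt"

    // kubeletExecStart renders the ExecStart line seen in the drop-in above
    // from the Kubernetes version (which selects the binary path), the node
    // hostname override, and the node IP.
    func kubeletExecStart(version, hostname, nodeIP string) string {
        return fmt.Sprintf(
            "ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
                "--config=/var/lib/kubelet/config.yaml "+
                "--hostname-override=%s "+
                "--kubeconfig=/etc/kubernetes/kubelet.conf "+
                "--node-ip=%s",
            version, hostname, nodeIP)
    }

    func main() {
        fmt.Println(kubeletExecStart("v1.31.0", "kubernetes-upgrade-353455", "192.168.50.102"))
    }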
	I0829 19:22:10.833584   63960 ssh_runner.go:195] Run: crio config
	I0829 19:22:11.082681   63960 cni.go:84] Creating CNI manager for ""
	I0829 19:22:11.082717   63960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:22:11.082738   63960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:22:11.082778   63960 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-353455 NodeName:kubernetes-upgrade-353455 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:22:11.082981   63960 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-353455"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
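Note: the kubeadm config dumped above is a single file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch that splits such a multi-document file and prints each document's kind, assuming the dump has been saved locally as kubeadm.yaml (hypothetical path) and using gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // kubeadm.yaml is assumed to hold the multi-document config shown above.
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            // Expected kinds: InitConfiguration, ClusterConfiguration,
            // KubeletConfiguration, KubeProxyConfiguration.
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }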
	I0829 19:22:11.083081   63960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:22:11.181053   63960 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:22:11.181145   63960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:22:11.227656   63960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0829 19:22:11.352976   63960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:22:11.486618   63960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:22:11.589619   63960 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0829 19:22:11.609228   63960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:22:11.948411   63960 ssh_runner.go:195] Run: sudo systemctl start kubelet
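Note: the "scp memory -->" lines above copy content generated in memory (the kubelet drop-in, the unit file, kubeadm.yaml.new) straight onto the guest over SSH, after which systemd units are reloaded and kubelet is started. An illustrative approximation of that flow using golang.org/x/crypto/ssh; host, user and auth here are placeholders, and minikube's own ssh_runner does the real work:

    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // pushFile streams in-memory content into a file on the guest via
    // "sudo tee", approximating the "scp memory -->" steps in the log.
    func pushFile(client *ssh.Client, content []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(content)
        return sess.Run("sudo tee " + dest + " >/dev/null")
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "root",                                   // per the cluster config above
            Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth only
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.50.102:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        unit := []byte("[Service]\nExecStart=\n") // stands in for the generated drop-in
        if err := pushFile(client, unit, "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"); err != nil {
            log.Fatal(err)
        }
        // Mirror the log's next two steps: reload units, then start kubelet.
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        if err := sess.Run("sudo systemctl daemon-reload && sudo systemctl start kubelet"); err != nil {
            log.Fatal(err)
        }
    }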
	I0829 19:22:11.985258   63960 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455 for IP: 192.168.50.102
	I0829 19:22:11.985287   63960 certs.go:194] generating shared ca certs ...
	I0829 19:22:11.985309   63960 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:22:11.985534   63960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:22:11.985616   63960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:22:11.985633   63960 certs.go:256] generating profile certs ...
	I0829 19:22:11.985768   63960 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/client.key
	I0829 19:22:11.985846   63960 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key.d93ce222
	I0829 19:22:11.985899   63960 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key
	I0829 19:22:11.986046   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:22:11.986117   63960 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:22:11.986131   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:22:11.986167   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:22:11.986214   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:22:11.986243   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:22:11.986311   63960 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:22:11.991503   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:22:12.046976   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:22:12.162255   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:22:12.211953   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:22:12.244188   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0829 19:22:12.272134   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:22:12.302541   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:22:12.334884   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kubernetes-upgrade-353455/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:22:12.394205   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:22:12.445842   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:22:12.499700   63960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:22:12.587552   63960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:22:12.654436   63960 ssh_runner.go:195] Run: openssl version
	I0829 19:22:12.670724   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:22:12.685960   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.690505   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.690567   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:22:12.698450   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:22:12.710185   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:22:12.723972   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.730197   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.730259   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:22:12.737837   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:22:12.748838   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:22:12.761782   63960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.766511   63960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.766573   63960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:22:12.772918   63960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
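Note: the openssl/ln steps above install each CA under /usr/share/ca-certificates and then symlink it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted roots. A simplified Go sketch of the same two steps, under the assumption that openssl is on PATH; it skips an existing link rather than force-replacing it, which matches the log's `test -L ... || ln -fs ...` guard:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // symlinks it into /etc/ssl/certs/<hash>.0, mirroring the shell steps above.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
            return err
        }
        return nil
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }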
	I0829 19:22:12.784575   63960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:22:12.788891   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:22:12.794996   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:22:12.800752   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:22:12.806803   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:22:12.812096   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:22:12.817499   63960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
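Note: each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certs are judged reusable. A pure-Go equivalent using crypto/x509 — a sketch reading one of the paths from the log, not the tool's actual check:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given duration, mirroring `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Path taken from the log above; 86400 s == 24 h.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }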
	I0829 19:22:12.823087   63960 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-353455 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-353455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:22:12.823187   63960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:22:12.823257   63960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:22:12.896542   63960 cri.go:89] found id: "c7e0f44fc45d81ca34fca662b700345a22ba250ea849d66ac39f08b6e8a456f2"
	I0829 19:22:12.896570   63960 cri.go:89] found id: "c8cd73d6720d1b99801804e1b45a1e66c095ecc42c32b8e31be70e94ba9c4ac6"
	I0829 19:22:12.896577   63960 cri.go:89] found id: "1a78648db20de6182de8323fe55bf66304a08df29615812edc781b6f181bda98"
	I0829 19:22:12.896582   63960 cri.go:89] found id: "d390a833f84a3199f7a1e4020b262916b76f50a210ff2ee2a9ab18fd2786fc5d"
	I0829 19:22:12.896604   63960 cri.go:89] found id: "212a7de66df56120f26702b7d4288eeb909fc5d28bef83eec75437e632a1cfa2"
	I0829 19:22:12.896609   63960 cri.go:89] found id: "cf1d139c8dd93ae59eb53ed1b75cacf6052fdfa08ab988f2a806088370223dd0"
	I0829 19:22:12.896615   63960 cri.go:89] found id: "008c80c30cf67f6babdc10990eef1bdff506ebd2c0b40298813292b0cc269ebd"
	I0829 19:22:12.896619   63960 cri.go:89] found id: "0cd01fd8b57cf8f4e4b611390b809d76c0d79dfe675a582f411a5b6853b0ac5c"
	I0829 19:22:12.896623   63960 cri.go:89] found id: "b089c64d036f0349d5af067696bc01f28fb421669b56528167c94d2f0fc02808"
	I0829 19:22:12.896632   63960 cri.go:89] found id: "36b3fb146d05a158f24dab08aa4d54f194eeeaa0402b864428388d48c52e1073"
	I0829 19:22:12.896640   63960 cri.go:89] found id: "285e5ee3c69a9ecc036b0e95fe246a25aff705e9f2394440563359ff587bada7"
	I0829 19:22:12.896644   63960 cri.go:89] found id: "f537dc7a1b4a4b62a16b3dad35ee2633093b730e986c2461d312b3c7cc39dc90"
	I0829 19:22:12.896651   63960 cri.go:89] found id: "e05767faee629c2756c35722878496839934351dda4ee2bd3838c2986c7fcf3e"
	I0829 19:22:12.896655   63960 cri.go:89] found id: "9b563857dc4d3fa049193ff55c4f1810290a0b471d1a76434f996f7cbbf2df86"
	I0829 19:22:12.896663   63960 cri.go:89] found id: "214cbd72e3eb481ba9580536acabdc6d3bb6bf3a248a6cac0ad64a5149a1b4eb"
	I0829 19:22:12.896670   63960 cri.go:89] found id: ""
	I0829 19:22:12.896756   63960 ssh_runner.go:195] Run: sudo runc list -f json
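Note: the container IDs listed above come from the crictl invocation at 19:22:12.823257, which filters all containers by the kube-system namespace label. A small Go wrapper around a simplified form of that same command (dropping the `sudo -s eval` shell indirection); the wrapper itself is hypothetical:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs runs the crictl listing from the log above and
    // returns one container ID per line of output.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range ids {
            fmt.Println(id)
        }
    }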
	
	
	==> CRI-O <==
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.405903842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e1eaf5c-68b5-41cf-8404-91533023c51e name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.407471500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca776eee-4071-4082-ace6-8210c28a3b30 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.408335184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959365408305021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca776eee-4071-4082-ace6-8210c28a3b30 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.409168005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05929d8d-fe0c-46fd-89f8-406e14a353f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.409226896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05929d8d-fe0c-46fd-89f8-406e14a353f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.409503642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05929d8d-fe0c-46fd-89f8-406e14a353f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.460351731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=208310bd-fd94-41f7-8410-518c0b2fe557 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.460471206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=208310bd-fd94-41f7-8410-518c0b2fe557 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.462324267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1144f478-5f84-4243-a8bd-1f13041589ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.463058662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959365463025514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1144f478-5f84-4243-a8bd-1f13041589ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.463588188Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=88988823-c9c9-46bd-b3d4-060f479fb0c0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.463698079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22b71b84-cd4f-4bab-982c-40067a822623 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.463931069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22b71b84-cd4f-4bab-982c-40067a822623 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.464068642Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-rvxpb,Uid:670ff94f-8820-40e2-b7e1-2b4180f6ff93,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338651966282,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.589029396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&PodSandboxMetadata{Name:kube-proxy-6xmsm,Uid:b54d05be-c00f-4fc4-b25f-126fc5e21687,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1724959338478082537,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.227265676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-518621,Uid:845adfaa897323c582d0ae3d1493297e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338412150190,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,tier: control-pla
ne,},Annotations:map[string]string{kubernetes.io/config.hash: 845adfaa897323c582d0ae3d1493297e,kubernetes.io/config.seen: 2024-08-29T19:21:50.582440042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&PodSandboxMetadata{Name:etcd-pause-518621,Uid:3e4d9b9c749be9cbff73417887c5ae5d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338404211631,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.203:2379,kubernetes.io/config.hash: 3e4d9b9c749be9cbff73417887c5ae5d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582435609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a2063a99ed5a573ee287d3cf52dad376
b7700a4fbcf7b94bdb89c226f614e38,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-518621,Uid:93b2de5558c6115b2b50b8e9c44c789d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338402163070,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 93b2de5558c6115b2b50b8e9c44c789d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582440835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-518621,Uid:0c02e517bf1d23dc9b63ad994dac8382,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724959338263741736,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.203:8443,kubernetes.io/config.hash: 0c02e517bf1d23dc9b63ad994dac8382,kubernetes.io/config.seen: 2024-08-29T19:21:50.582438999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-rvxpb,Uid:670ff94f-8820-40e2-b7e1-2b4180f6ff93,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959336026456853,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-08-29T19:21:55.589029396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&PodSandboxMetadata{Name:etcd-pause-518621,Uid:3e4d9b9c749be9cbff73417887c5ae5d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335925247699,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.203:2379,kubernetes.io/config.hash: 3e4d9b9c749be9cbff73417887c5ae5d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582435609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-518621,Uid:93b2de55
58c6115b2b50b8e9c44c789d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335920408658,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 93b2de5558c6115b2b50b8e9c44c789d,kubernetes.io/config.seen: 2024-08-29T19:21:50.582440835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-518621,Uid:0c02e517bf1d23dc9b63ad994dac8382,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335886118675,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.203:8443,kubernetes.io/config.hash: 0c02e517bf1d23dc9b63ad994dac8382,kubernetes.io/config.seen: 2024-08-29T19:21:50.582438999Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-518621,Uid:845adfaa897323c582d0ae3d1493297e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335881177036,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 845adfaa897323c582d0ae3d1493297e,kubernetes.io/config.seen: 202
4-08-29T19:21:50.582440042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-6xmsm,Uid:b54d05be-c00f-4fc4-b25f-126fc5e21687,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724959335868710166,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.227265676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8cb7ea02a8ea469ae470e82fb9f701e78fb761560dc45ba5216ffebf2afdc6af,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-x4hfc,Uid:a29eeba0-da21-4ed5-9a1f-c3dec86499b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:17249593
15933571496,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-x4hfc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a29eeba0-da21-4ed5-9a1f-c3dec86499b9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:21:55.615074053Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=88988823-c9c9-46bd-b3d4-060f479fb0c0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.464176972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22b71b84-cd4f-4bab-982c-40067a822623 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.464824897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84f73c8a-79fd-4468-9d28-81bc629623c4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.465293648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84f73c8a-79fd-4468-9d28-81bc629623c4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.465796442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84f73c8a-79fd-4468-9d28-81bc629623c4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.511460112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec4c8403-2a5b-46f2-938e-eeafe2f8d55d name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.511552658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec4c8403-2a5b-46f2-938e-eeafe2f8d55d name=/runtime.v1.RuntimeService/Version
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.513112663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6b331c8-4d99-4745-a411-8afa67772645 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.513489745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959365513466858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6b331c8-4d99-4745-a411-8afa67772645 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.514105953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3755ecf6-5c30-4f91-8964-1ea6b0acdb03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.514175178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3755ecf6-5c30-4f91-8964-1ea6b0acdb03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:22:45 pause-518621 crio[2828]: time="2024-08-29 19:22:45.514421419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74,PodSandboxId:203172b7233c7a3742fdf3dfc8d51a3d1cc02e8fc62a67361e21033edebacda4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724959345491387056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4,PodSandboxId:540ac31f915eb4738fabb04232631bee190c6e6148fc01c027aa1924f95e285d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724959345503050334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d,PodSandboxId:2a2063a99ed5a573ee287d3cf52dad376b7700a4fbcf7b94bdb89c226f614e38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724959341688770284,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c
6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c,PodSandboxId:5bd0a718df47ee43c7063593c7e5a86f809ce20ba0a9b3f2eebd228dec9df161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724959341684212802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84
5adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f,PodSandboxId:eb294925da7af38ea4abbc32de45eb16ee3e8fb5f349a9a8c629307f52e16205,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724959341685874527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c02e517bf1d23dc9b63
ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4,PodSandboxId:c0f4cb9e5d4918ec16d55f5ba99dbd749f5c930b02bc91b6d46cd8da2cf52754,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724959341663165115,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4,PodSandboxId:5a42daab787dc418dc7b2880497775222aa635b2319040efd35fb1a55930609e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724959337224710333,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rvxpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 670ff94f-8820-40e2-b7e1-2b4180f6ff93,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c,PodSandboxId:5acf290f0c22b4ed27e4f3fdc3cd293196fdf293f57834e413c74b6f6d9705f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724959336464947747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-6xmsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54d05be-c00f-4fc4-b25f-126fc5e21687,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97,PodSandboxId:ae8c40adc0854321b3d3d7bce36e45f6b534294189b93399e7a76d46c16d1141,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724959336588911703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 845adfaa897323c582d0ae3d1493297e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee,PodSandboxId:bf74783da3e501f6af68a895c5de74ab234ecf9003762debea3bd61cb83a6155,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724959336507313467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-schedul
er-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b2de5558c6115b2b50b8e9c44c789d,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50,PodSandboxId:412f1d0875b3db1e919b7c88f980a2b8b04a8d2bc797067c67dadda91d4092bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724959336418853283,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-518621,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3e4d9b9c749be9cbff73417887c5ae5d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6,PodSandboxId:c8322732f9b18871d20b78944627d40b2325f6dd8e77ac0556f6825d33bf71a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724959336252364253,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-518621,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 0c02e517bf1d23dc9b63ad994dac8382,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3755ecf6-5c30-4f91-8964-1ea6b0acdb03 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56ca4a7259d27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   2                   540ac31f915eb       coredns-6f6b679f8f-rvxpb
	be6d8aee29bb3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   20 seconds ago      Running             kube-proxy                2                   203172b7233c7       kube-proxy-6xmsm
	77d14fe423e9e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   23 seconds ago      Running             kube-scheduler            2                   2a2063a99ed5a       kube-scheduler-pause-518621
	4a67fe21c0436       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago      Running             kube-apiserver            2                   eb294925da7af       kube-apiserver-pause-518621
	98be6be7b34e6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   23 seconds ago      Running             kube-controller-manager   2                   5bd0a718df47e       kube-controller-manager-pause-518621
	c9cfee9c3aaa2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   c0f4cb9e5d491       etcd-pause-518621
	a306ba179d404       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago      Exited              coredns                   1                   5a42daab787dc       coredns-6f6b679f8f-rvxpb
	bdf90b489bfec       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   29 seconds ago      Exited              kube-controller-manager   1                   ae8c40adc0854       kube-controller-manager-pause-518621
	cf17176a7ba5d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   29 seconds ago      Exited              kube-scheduler            1                   bf74783da3e50       kube-scheduler-pause-518621
	03cf670122c35       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   29 seconds ago      Exited              kube-proxy                1                   5acf290f0c22b       kube-proxy-6xmsm
	fb392b4b0fb0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago      Exited              etcd                      1                   412f1d0875b3d       etcd-pause-518621
	791dada70b1ab       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   29 seconds ago      Exited              kube-apiserver            1                   c8322732f9b18       kube-apiserver-pause-518621
	
	
	==> coredns [56ca4a7259d27aad2c7d5c558bfe9888aa96db259a47c7cc493361a00d15d2b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48846 - 36555 "HINFO IN 8690049870560730438.6411881968822899713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011062256s
	
	
	==> coredns [a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4] <==
	
	
	==> describe nodes <==
	Name:               pause-518621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-518621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=pause-518621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_21_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:21:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-518621
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:22:25 +0000   Thu, 29 Aug 2024 19:21:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.203
	  Hostname:    pause-518621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ac65647b7ce4003907f5a59eb0e8c1c
	  System UUID:                3ac65647-b7ce-4003-907f-5a59eb0e8c1c
	  Boot ID:                    c3052a04-7f28-42d8-9c14-06eda5fd4094
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-rvxpb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     50s
	  kube-system                 etcd-pause-518621                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         55s
	  kube-system                 kube-apiserver-pause-518621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-pause-518621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-6xmsm                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-pause-518621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     55s                kubelet          Node pause-518621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  55s                kubelet          Node pause-518621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s                kubelet          Node pause-518621 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeReady                54s                kubelet          Node pause-518621 status is now: NodeReady
	  Normal  RegisteredNode           51s                node-controller  Node pause-518621 event: Registered Node pause-518621 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-518621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-518621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-518621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-518621 event: Registered Node pause-518621 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.786739] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.056321] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058507] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.162537] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.150876] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.287587] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.053036] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +4.008855] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.067204] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.510735] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.079653] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.170552] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.217064] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[Aug29 19:22] kauditd_printk_skb: 98 callbacks suppressed
	[ +11.702933] systemd-fstab-generator[2398]: Ignoring "noauto" option for root device
	[  +0.335457] systemd-fstab-generator[2513]: Ignoring "noauto" option for root device
	[  +0.333918] systemd-fstab-generator[2639]: Ignoring "noauto" option for root device
	[  +0.288285] systemd-fstab-generator[2720]: Ignoring "noauto" option for root device
	[  +0.458354] systemd-fstab-generator[2818]: Ignoring "noauto" option for root device
	[  +1.526222] systemd-fstab-generator[3390]: Ignoring "noauto" option for root device
	[  +1.829605] systemd-fstab-generator[3508]: Ignoring "noauto" option for root device
	[  +0.085327] kauditd_printk_skb: 244 callbacks suppressed
	[  +7.653546] kauditd_printk_skb: 50 callbacks suppressed
	[ +10.477321] systemd-fstab-generator[3949]: Ignoring "noauto" option for root device
	
	
	==> etcd [c9cfee9c3aaa27ce72560b55ca68c53625d142ca059ccfbb1f7b3dc8fa1354a4] <==
	{"level":"info","ts":"2024-08-29T19:22:22.014415Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2024-08-29T19:22:22.014537Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:22.014582Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:22.020133Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:22.027325Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:22:22.028763Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:22:22.030689Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:22:22.030777Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-08-29T19:22:22.030804Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-08-29T19:22:23.679681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:23.679746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:23.679893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:23.679913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.679920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.679948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.679957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-08-29T19:22:23.686257Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-518621 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:22:23.686259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:23.686490Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:22:23.686950Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:23.687022Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:22:23.687599Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:23.687980Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:23.688456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	{"level":"info","ts":"2024-08-29T19:22:23.689317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50] <==
	{"level":"info","ts":"2024-08-29T19:22:16.974039Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-29T19:22:17.015500Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","commit-index":407}
	{"level":"info","ts":"2024-08-29T19:22:17.015823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-29T19:22:17.016080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became follower at term 2"}
	{"level":"info","ts":"2024-08-29T19:22:17.016179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3dce464254b32e20 [peers: [], term: 2, commit: 407, applied: 0, lastindex: 407, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-29T19:22:17.022228Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-29T19:22:17.028434Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":393}
	{"level":"info","ts":"2024-08-29T19:22:17.031743Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-29T19:22:17.038477Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3dce464254b32e20","timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:22:17.041511Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3dce464254b32e20"}
	{"level":"info","ts":"2024-08-29T19:22:17.041599Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"3dce464254b32e20","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-29T19:22:17.041896Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-29T19:22:17.043315Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:22:17.056125Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:17.056345Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:17.056421Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T19:22:17.056788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=(4453574332218813984)"}
	{"level":"info","ts":"2024-08-29T19:22:17.058353Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2024-08-29T19:22:17.061023Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:17.061124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:22:17.066286Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:22:17.066593Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:22:17.068526Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:22:17.066418Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-08-29T19:22:17.072883Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.203:2380"}
	
	
	==> kernel <==
	 19:22:45 up 1 min,  0 users,  load average: 2.27, 0.80, 0.28
	Linux pause-518621 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4a67fe21c04365d3d8ba7d0a5f3397f7780920454baf7a3f83599274b161ed5f] <==
	I0829 19:22:25.072737       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0829 19:22:25.073146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0829 19:22:25.077721       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 19:22:25.077780       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 19:22:25.078039       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0829 19:22:25.078155       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 19:22:25.086697       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0829 19:22:25.089050       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 19:22:25.095141       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0829 19:22:25.095601       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 19:22:25.095693       1 policy_source.go:224] refreshing policies
	I0829 19:22:25.095757       1 aggregator.go:171] initial CRD sync complete...
	I0829 19:22:25.095785       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 19:22:25.095807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 19:22:25.095828       1 cache.go:39] Caches are synced for autoregister controller
	I0829 19:22:25.097781       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0829 19:22:25.108843       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0829 19:22:25.975853       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 19:22:26.367436       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 19:22:26.412589       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 19:22:26.454382       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 19:22:26.484185       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 19:22:26.491529       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 19:22:28.525405       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 19:22:28.678545       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6] <==
	I0829 19:22:17.222325       1 options.go:228] external host was not specified, using 192.168.61.203
	I0829 19:22:17.248563       1 server.go:142] Version: v1.31.0
	I0829 19:22:17.248724       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [98be6be7b34e6fb984f2f07d5f222f9d3debe1defe3b3aee2e7845c78157b54c] <==
	I0829 19:22:28.373143       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0829 19:22:28.373188       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0829 19:22:28.373238       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0829 19:22:28.373304       1 shared_informer.go:320] Caches are synced for PV protection
	I0829 19:22:28.373352       1 shared_informer.go:320] Caches are synced for GC
	I0829 19:22:28.373408       1 shared_informer.go:320] Caches are synced for ephemeral
	I0829 19:22:28.378418       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0829 19:22:28.384493       1 shared_informer.go:320] Caches are synced for crt configmap
	I0829 19:22:28.388222       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0829 19:22:28.421470       1 shared_informer.go:320] Caches are synced for daemon sets
	I0829 19:22:28.427949       1 shared_informer.go:320] Caches are synced for stateful set
	I0829 19:22:28.518463       1 shared_informer.go:320] Caches are synced for service account
	I0829 19:22:28.527962       1 shared_informer.go:320] Caches are synced for namespace
	I0829 19:22:28.528337       1 shared_informer.go:320] Caches are synced for HPA
	I0829 19:22:28.531881       1 shared_informer.go:320] Caches are synced for disruption
	I0829 19:22:28.550782       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0829 19:22:28.575459       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:28.579419       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:22:28.588446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="264.793771ms"
	I0829 19:22:28.588807       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="77.754µs"
	I0829 19:22:29.008752       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:29.022479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:22:29.022591       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 19:22:34.468323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.280917ms"
	I0829 19:22:34.468722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.303µs"
	
	
	==> kube-controller-manager [bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97] <==
	
	
	==> kube-proxy [03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c] <==
	
	
	==> kube-proxy [be6d8aee29bb3a210c92a6c79e00e794572640a7f9dc5276a6aa5f704807ef74] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:22:25.699377       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:22:25.706422       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.203"]
	E0829 19:22:25.706502       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:22:25.738218       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:22:25.738269       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:22:25.738292       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:22:25.740570       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:22:25.740912       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:22:25.740934       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:25.742051       1 config.go:197] "Starting service config controller"
	I0829 19:22:25.742095       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:22:25.742116       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:22:25.742120       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:22:25.742598       1 config.go:326] "Starting node config controller"
	I0829 19:22:25.742673       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:22:25.842832       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:22:25.842912       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:22:25.842857       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [77d14fe423e9e909bdfc400bdfb16a7b43ebab837f8c0f6ef0dcb0b4d7915b6d] <==
	I0829 19:22:22.500825       1 serving.go:386] Generated self-signed cert in-memory
	W0829 19:22:25.015960       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 19:22:25.016167       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 19:22:25.016200       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 19:22:25.016270       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 19:22:25.101403       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 19:22:25.104863       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:22:25.108601       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 19:22:25.108808       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 19:22:25.108869       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 19:22:25.113456       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 19:22:25.215683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee] <==
	
	
	==> kubelet <==
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.414216    3515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/845adfaa897323c582d0ae3d1493297e-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-518621\" (UID: \"845adfaa897323c582d0ae3d1493297e\") " pod="kube-system/kube-controller-manager-pause-518621"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: E0829 19:22:21.414846    3515 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-518621?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="400ms"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.609435    3515 kubelet_node_status.go:72] "Attempting to register node" node="pause-518621"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: E0829 19:22:21.610430    3515 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-518621"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.635814    3515 scope.go:117] "RemoveContainer" containerID="fb392b4b0fb0adafd3bb3017b10c0b4e28e348e69b06fd340c05daadc4f61d50"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.637726    3515 scope.go:117] "RemoveContainer" containerID="791dada70b1abb23063d357a07ab3f31963f6a0c27673fb9893ba6783d3255f6"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.639963    3515 scope.go:117] "RemoveContainer" containerID="bdf90b489bfec1aecf40f221f476e9e45dd852255b224d827df373f1a6ccfb97"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: I0829 19:22:21.640292    3515 scope.go:117] "RemoveContainer" containerID="cf17176a7ba5d0020d741cf843ccdcbd4ef0cbaa1cba624907b660a9d5edbdee"
	Aug 29 19:22:21 pause-518621 kubelet[3515]: E0829 19:22:21.817145    3515 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-518621?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="800ms"
	Aug 29 19:22:22 pause-518621 kubelet[3515]: I0829 19:22:22.011517    3515 kubelet_node_status.go:72] "Attempting to register node" node="pause-518621"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.167542    3515 apiserver.go:52] "Watching apiserver"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.212874    3515 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.213848    3515 kubelet_node_status.go:111] "Node was previously registered" node="pause-518621"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.213963    3515 kubelet_node_status.go:75] "Successfully registered node" node="pause-518621"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.214016    3515 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.216037    3515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.296928    3515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b54d05be-c00f-4fc4-b25f-126fc5e21687-xtables-lock\") pod \"kube-proxy-6xmsm\" (UID: \"b54d05be-c00f-4fc4-b25f-126fc5e21687\") " pod="kube-system/kube-proxy-6xmsm"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.296973    3515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b54d05be-c00f-4fc4-b25f-126fc5e21687-lib-modules\") pod \"kube-proxy-6xmsm\" (UID: \"b54d05be-c00f-4fc4-b25f-126fc5e21687\") " pod="kube-system/kube-proxy-6xmsm"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.473266    3515 scope.go:117] "RemoveContainer" containerID="03cf670122c358b3265d3b3b83c5eee7ac51f89067f86f5bb94c15895c2f8e8c"
	Aug 29 19:22:25 pause-518621 kubelet[3515]: I0829 19:22:25.473520    3515 scope.go:117] "RemoveContainer" containerID="a306ba179d40483cbc9ba8605ad0453affde46b455e0f2197b63ef6669b672b4"
	Aug 29 19:22:31 pause-518621 kubelet[3515]: E0829 19:22:31.309801    3515 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959351309460939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:22:31 pause-518621 kubelet[3515]: E0829 19:22:31.309847    3515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959351309460939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:22:34 pause-518621 kubelet[3515]: I0829 19:22:34.438766    3515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 29 19:22:41 pause-518621 kubelet[3515]: E0829 19:22:41.311579    3515 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959361310811394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:22:41 pause-518621 kubelet[3515]: E0829 19:22:41.312366    3515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724959361310811394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:22:45.063371   64697 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19531-13056/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-518621 -n pause-518621
helpers_test.go:261: (dbg) Run:  kubectl --context pause-518621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (40.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (283.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m43.059454472s)

                                                
                                                
-- stdout --
	* [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:25:25.473814   72476 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:25:25.473932   72476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:25.473941   72476 out.go:358] Setting ErrFile to fd 2...
	I0829 19:25:25.473945   72476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:25:25.474120   72476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:25:25.474664   72476 out.go:352] Setting JSON to false
	I0829 19:25:25.475718   72476 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7672,"bootTime":1724951853,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:25:25.475773   72476 start.go:139] virtualization: kvm guest
	I0829 19:25:25.477851   72476 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:25:25.479013   72476 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:25:25.479033   72476 notify.go:220] Checking for updates...
	I0829 19:25:25.481179   72476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:25:25.482174   72476 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:25:25.483248   72476 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:25:25.484325   72476 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:25:25.485263   72476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:25:25.486637   72476 config.go:182] Loaded profile config "bridge-633326": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:25.486742   72476 config.go:182] Loaded profile config "enable-default-cni-633326": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:25.486848   72476 config.go:182] Loaded profile config "flannel-633326": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:25:25.486955   72476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:25:25.523175   72476 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 19:25:25.524293   72476 start.go:297] selected driver: kvm2
	I0829 19:25:25.524305   72476 start.go:901] validating driver "kvm2" against <nil>
	I0829 19:25:25.524323   72476 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:25:25.524962   72476 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:25:25.525024   72476 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:25:25.540470   72476 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:25:25.540520   72476 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 19:25:25.540796   72476 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:25:25.540834   72476 cni.go:84] Creating CNI manager for ""
	I0829 19:25:25.540845   72476 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:25:25.540860   72476 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 19:25:25.540921   72476 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:25:25.541063   72476 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:25:25.542774   72476 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:25:25.543836   72476 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:25:25.543872   72476 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:25:25.543881   72476 cache.go:56] Caching tarball of preloaded images
	I0829 19:25:25.543949   72476 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:25:25.543959   72476 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:25:25.544041   72476 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:25:25.544063   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json: {Name:mk1a8603f6895464bc50ee43f2d31972c78e60ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:25:25.544208   72476 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:25:36.946969   72476 start.go:364] duration metric: took 11.40273002s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:25:36.947035   72476 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:25:36.947160   72476 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 19:25:36.949166   72476 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:25:36.949386   72476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:25:36.949443   72476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:25:36.966686   72476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I0829 19:25:36.967141   72476 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:25:36.967812   72476 main.go:141] libmachine: Using API Version  1
	I0829 19:25:36.967848   72476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:25:36.968255   72476 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:25:36.968464   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:25:36.968628   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:25:36.968774   72476 start.go:159] libmachine.API.Create for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:25:36.968806   72476 client.go:168] LocalClient.Create starting
	I0829 19:25:36.968845   72476 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 19:25:36.968877   72476 main.go:141] libmachine: Decoding PEM data...
	I0829 19:25:36.968907   72476 main.go:141] libmachine: Parsing certificate...
	I0829 19:25:36.968973   72476 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 19:25:36.968997   72476 main.go:141] libmachine: Decoding PEM data...
	I0829 19:25:36.969008   72476 main.go:141] libmachine: Parsing certificate...
	I0829 19:25:36.969030   72476 main.go:141] libmachine: Running pre-create checks...
	I0829 19:25:36.969039   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .PreCreateCheck
	I0829 19:25:36.969421   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:25:36.969819   72476 main.go:141] libmachine: Creating machine...
	I0829 19:25:36.969833   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .Create
	I0829 19:25:36.970041   72476 main.go:141] libmachine: (old-k8s-version-467349) Creating KVM machine...
	I0829 19:25:36.971496   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found existing default KVM network
	I0829 19:25:36.972784   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:36.972609   73485 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:4e:ba} reservation:<nil>}
	I0829 19:25:36.973833   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:36.973734   73485 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:27:29:78} reservation:<nil>}
	I0829 19:25:36.974783   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:36.974683   73485 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0e:7b:af} reservation:<nil>}
	I0829 19:25:36.976039   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:36.975918   73485 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00037f170}
	I0829 19:25:36.976099   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | created network xml: 
	I0829 19:25:36.976120   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | <network>
	I0829 19:25:36.976140   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |   <name>mk-old-k8s-version-467349</name>
	I0829 19:25:36.976155   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |   <dns enable='no'/>
	I0829 19:25:36.976168   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |   
	I0829 19:25:36.976186   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0829 19:25:36.976200   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |     <dhcp>
	I0829 19:25:36.976212   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0829 19:25:36.976230   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |     </dhcp>
	I0829 19:25:36.976245   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |   </ip>
	I0829 19:25:36.976256   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG |   
	I0829 19:25:36.976272   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | </network>
	I0829 19:25:36.976317   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | 
	I0829 19:25:36.981242   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | trying to create private KVM network mk-old-k8s-version-467349 192.168.72.0/24...
	I0829 19:25:37.066247   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349 ...
	I0829 19:25:37.066366   72476 main.go:141] libmachine: (old-k8s-version-467349) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 19:25:37.066379   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | private KVM network mk-old-k8s-version-467349 192.168.72.0/24 created
	I0829 19:25:37.066398   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:37.064896   73485 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:25:37.066417   72476 main.go:141] libmachine: (old-k8s-version-467349) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 19:25:37.354496   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:37.354256   73485 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa...
	I0829 19:25:37.399265   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:37.399126   73485 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/old-k8s-version-467349.rawdisk...
	I0829 19:25:37.399320   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Writing magic tar header
	I0829 19:25:37.399366   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Writing SSH key tar header
	I0829 19:25:37.399403   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:37.399251   73485 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349 ...
	I0829 19:25:37.399422   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349 (perms=drwx------)
	I0829 19:25:37.399453   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:25:37.399469   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 19:25:37.399493   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349
	I0829 19:25:37.399524   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 19:25:37.399539   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 19:25:37.399552   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:25:37.399567   72476 main.go:141] libmachine: (old-k8s-version-467349) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:25:37.399583   72476 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:25:37.399608   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:25:37.399626   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 19:25:37.399643   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:25:37.399657   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:25:37.399667   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Checking permissions on dir: /home
	I0829 19:25:37.399680   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Skipping /home - not owner
	I0829 19:25:37.400641   72476 main.go:141] libmachine: (old-k8s-version-467349) define libvirt domain using xml: 
	I0829 19:25:37.400666   72476 main.go:141] libmachine: (old-k8s-version-467349) <domain type='kvm'>
	I0829 19:25:37.400677   72476 main.go:141] libmachine: (old-k8s-version-467349)   <name>old-k8s-version-467349</name>
	I0829 19:25:37.400685   72476 main.go:141] libmachine: (old-k8s-version-467349)   <memory unit='MiB'>2200</memory>
	I0829 19:25:37.400707   72476 main.go:141] libmachine: (old-k8s-version-467349)   <vcpu>2</vcpu>
	I0829 19:25:37.400718   72476 main.go:141] libmachine: (old-k8s-version-467349)   <features>
	I0829 19:25:37.400728   72476 main.go:141] libmachine: (old-k8s-version-467349)     <acpi/>
	I0829 19:25:37.400738   72476 main.go:141] libmachine: (old-k8s-version-467349)     <apic/>
	I0829 19:25:37.400746   72476 main.go:141] libmachine: (old-k8s-version-467349)     <pae/>
	I0829 19:25:37.400753   72476 main.go:141] libmachine: (old-k8s-version-467349)     
	I0829 19:25:37.400762   72476 main.go:141] libmachine: (old-k8s-version-467349)   </features>
	I0829 19:25:37.400768   72476 main.go:141] libmachine: (old-k8s-version-467349)   <cpu mode='host-passthrough'>
	I0829 19:25:37.400776   72476 main.go:141] libmachine: (old-k8s-version-467349)   
	I0829 19:25:37.400785   72476 main.go:141] libmachine: (old-k8s-version-467349)   </cpu>
	I0829 19:25:37.400793   72476 main.go:141] libmachine: (old-k8s-version-467349)   <os>
	I0829 19:25:37.400812   72476 main.go:141] libmachine: (old-k8s-version-467349)     <type>hvm</type>
	I0829 19:25:37.400823   72476 main.go:141] libmachine: (old-k8s-version-467349)     <boot dev='cdrom'/>
	I0829 19:25:37.400831   72476 main.go:141] libmachine: (old-k8s-version-467349)     <boot dev='hd'/>
	I0829 19:25:37.400847   72476 main.go:141] libmachine: (old-k8s-version-467349)     <bootmenu enable='no'/>
	I0829 19:25:37.400857   72476 main.go:141] libmachine: (old-k8s-version-467349)   </os>
	I0829 19:25:37.400869   72476 main.go:141] libmachine: (old-k8s-version-467349)   <devices>
	I0829 19:25:37.400880   72476 main.go:141] libmachine: (old-k8s-version-467349)     <disk type='file' device='cdrom'>
	I0829 19:25:37.400894   72476 main.go:141] libmachine: (old-k8s-version-467349)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/boot2docker.iso'/>
	I0829 19:25:37.400906   72476 main.go:141] libmachine: (old-k8s-version-467349)       <target dev='hdc' bus='scsi'/>
	I0829 19:25:37.400923   72476 main.go:141] libmachine: (old-k8s-version-467349)       <readonly/>
	I0829 19:25:37.400933   72476 main.go:141] libmachine: (old-k8s-version-467349)     </disk>
	I0829 19:25:37.400946   72476 main.go:141] libmachine: (old-k8s-version-467349)     <disk type='file' device='disk'>
	I0829 19:25:37.400958   72476 main.go:141] libmachine: (old-k8s-version-467349)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:25:37.400971   72476 main.go:141] libmachine: (old-k8s-version-467349)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/old-k8s-version-467349.rawdisk'/>
	I0829 19:25:37.400987   72476 main.go:141] libmachine: (old-k8s-version-467349)       <target dev='hda' bus='virtio'/>
	I0829 19:25:37.400999   72476 main.go:141] libmachine: (old-k8s-version-467349)     </disk>
	I0829 19:25:37.401006   72476 main.go:141] libmachine: (old-k8s-version-467349)     <interface type='network'>
	I0829 19:25:37.401019   72476 main.go:141] libmachine: (old-k8s-version-467349)       <source network='mk-old-k8s-version-467349'/>
	I0829 19:25:37.401027   72476 main.go:141] libmachine: (old-k8s-version-467349)       <model type='virtio'/>
	I0829 19:25:37.401054   72476 main.go:141] libmachine: (old-k8s-version-467349)     </interface>
	I0829 19:25:37.401072   72476 main.go:141] libmachine: (old-k8s-version-467349)     <interface type='network'>
	I0829 19:25:37.401085   72476 main.go:141] libmachine: (old-k8s-version-467349)       <source network='default'/>
	I0829 19:25:37.401095   72476 main.go:141] libmachine: (old-k8s-version-467349)       <model type='virtio'/>
	I0829 19:25:37.401102   72476 main.go:141] libmachine: (old-k8s-version-467349)     </interface>
	I0829 19:25:37.401108   72476 main.go:141] libmachine: (old-k8s-version-467349)     <serial type='pty'>
	I0829 19:25:37.401116   72476 main.go:141] libmachine: (old-k8s-version-467349)       <target port='0'/>
	I0829 19:25:37.401126   72476 main.go:141] libmachine: (old-k8s-version-467349)     </serial>
	I0829 19:25:37.401135   72476 main.go:141] libmachine: (old-k8s-version-467349)     <console type='pty'>
	I0829 19:25:37.401150   72476 main.go:141] libmachine: (old-k8s-version-467349)       <target type='serial' port='0'/>
	I0829 19:25:37.401162   72476 main.go:141] libmachine: (old-k8s-version-467349)     </console>
	I0829 19:25:37.401174   72476 main.go:141] libmachine: (old-k8s-version-467349)     <rng model='virtio'>
	I0829 19:25:37.401189   72476 main.go:141] libmachine: (old-k8s-version-467349)       <backend model='random'>/dev/random</backend>
	I0829 19:25:37.401199   72476 main.go:141] libmachine: (old-k8s-version-467349)     </rng>
	I0829 19:25:37.401208   72476 main.go:141] libmachine: (old-k8s-version-467349)     
	I0829 19:25:37.401221   72476 main.go:141] libmachine: (old-k8s-version-467349)     
	I0829 19:25:37.401237   72476 main.go:141] libmachine: (old-k8s-version-467349)   </devices>
	I0829 19:25:37.401248   72476 main.go:141] libmachine: (old-k8s-version-467349) </domain>
	I0829 19:25:37.401258   72476 main.go:141] libmachine: (old-k8s-version-467349) 
	I0829 19:25:37.406022   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:7b:b9:5d in network default
	I0829 19:25:37.406557   72476 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:25:37.406584   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:37.407194   72476 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:25:37.407502   72476 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:25:37.408007   72476 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:25:37.408644   72476 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:25:38.914162   72476 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:25:38.916497   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:38.917155   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:38.917178   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:38.917057   73485 retry.go:31] will retry after 197.853407ms: waiting for machine to come up
	I0829 19:25:39.116449   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:39.117118   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:39.117152   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:39.117059   73485 retry.go:31] will retry after 322.727055ms: waiting for machine to come up
	I0829 19:25:39.443958   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:39.445684   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:39.445714   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:39.445651   73485 retry.go:31] will retry after 416.078639ms: waiting for machine to come up
	I0829 19:25:40.088408   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:40.089030   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:40.089051   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:40.088997   73485 retry.go:31] will retry after 422.595949ms: waiting for machine to come up
	I0829 19:25:40.513727   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:40.514296   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:40.514318   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:40.514253   73485 retry.go:31] will retry after 628.237531ms: waiting for machine to come up
	I0829 19:25:41.143945   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:41.144568   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:41.144606   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:41.144534   73485 retry.go:31] will retry after 717.687857ms: waiting for machine to come up
	I0829 19:25:41.864595   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:41.865127   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:41.865156   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:41.865072   73485 retry.go:31] will retry after 943.929905ms: waiting for machine to come up
	I0829 19:25:42.810538   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:42.810975   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:42.811030   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:42.810954   73485 retry.go:31] will retry after 1.376094879s: waiting for machine to come up
	I0829 19:25:44.188846   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:44.189473   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:44.189502   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:44.189430   73485 retry.go:31] will retry after 1.641087137s: waiting for machine to come up
	I0829 19:25:45.831793   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:45.832319   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:45.832343   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:45.832240   73485 retry.go:31] will retry after 1.766410372s: waiting for machine to come up
	I0829 19:25:47.600049   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:47.600586   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:47.600615   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:47.600543   73485 retry.go:31] will retry after 1.798722294s: waiting for machine to come up
	I0829 19:25:49.401603   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:49.402023   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:49.402043   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:49.401985   73485 retry.go:31] will retry after 3.285295428s: waiting for machine to come up
	I0829 19:25:52.689026   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:52.689521   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:52.689563   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:52.689497   73485 retry.go:31] will retry after 3.491021675s: waiting for machine to come up
	I0829 19:25:56.183088   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:25:56.183494   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:25:56.183516   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:25:56.183451   73485 retry.go:31] will retry after 4.603586632s: waiting for machine to come up
	I0829 19:26:00.791278   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:00.791841   72476 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:26:00.791869   72476 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:26:00.791885   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:00.792271   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349
	I0829 19:26:00.871628   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:26:00.871655   72476 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:26:00.871696   72476 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:26:00.874710   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:00.875214   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:00.875242   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:00.875404   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:26:00.875421   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:26:00.875451   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:26:00.875461   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:26:00.875471   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:26:01.002249   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:26:01.002563   72476 main.go:141] libmachine: (old-k8s-version-467349) KVM machine creation complete!
	I0829 19:26:01.002885   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:26:01.003542   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:01.003741   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:01.003909   72476 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0829 19:26:01.003925   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:26:01.005207   72476 main.go:141] libmachine: Detecting operating system of created instance...
	I0829 19:26:01.005223   72476 main.go:141] libmachine: Waiting for SSH to be available...
	I0829 19:26:01.005229   72476 main.go:141] libmachine: Getting to WaitForSSH function...
	I0829 19:26:01.005235   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.007617   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.007961   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.007999   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.008100   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.008271   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.008433   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.008553   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.008709   72476 main.go:141] libmachine: Using SSH client type: native
	I0829 19:26:01.008968   72476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:26:01.008982   72476 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0829 19:26:01.109699   72476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:26:01.109731   72476 main.go:141] libmachine: Detecting the provisioner...
	I0829 19:26:01.109739   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.112504   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.112853   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.112891   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.113016   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.113212   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.113377   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.113504   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.113652   72476 main.go:141] libmachine: Using SSH client type: native
	I0829 19:26:01.113845   72476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:26:01.113860   72476 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0829 19:26:01.214580   72476 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0829 19:26:01.214661   72476 main.go:141] libmachine: found compatible host: buildroot
	I0829 19:26:01.214672   72476 main.go:141] libmachine: Provisioning with buildroot...
	I0829 19:26:01.214681   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:26:01.214951   72476 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:26:01.214973   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:26:01.215180   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.217778   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.218126   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.218163   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.218375   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.218602   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.218792   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.218988   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.219157   72476 main.go:141] libmachine: Using SSH client type: native
	I0829 19:26:01.219365   72476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:26:01.219383   72476 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:26:01.332537   72476 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:26:01.332568   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.335813   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.336237   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.336269   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.336535   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.336778   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.336962   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.337147   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.337310   72476 main.go:141] libmachine: Using SSH client type: native
	I0829 19:26:01.337539   72476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:26:01.337564   72476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:26:01.446724   72476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:26:01.446752   72476 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:26:01.446790   72476 buildroot.go:174] setting up certificates
	I0829 19:26:01.446798   72476 provision.go:84] configureAuth start
	I0829 19:26:01.446815   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:26:01.447136   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:26:01.449459   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.449810   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.449837   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.449999   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.452344   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.452670   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.452698   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.452827   72476 provision.go:143] copyHostCerts
	I0829 19:26:01.452916   72476 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:26:01.452937   72476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:26:01.453016   72476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:26:01.453142   72476 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:26:01.453153   72476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:26:01.453185   72476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:26:01.453260   72476 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:26:01.453271   72476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:26:01.453307   72476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:26:01.453366   72476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
	I0829 19:26:01.600377   72476 provision.go:177] copyRemoteCerts
	I0829 19:26:01.600437   72476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:26:01.600460   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.603109   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.603433   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.603471   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.603626   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.603818   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.604015   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.604179   72476 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:26:01.683772   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:26:01.709312   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:26:01.730664   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:26:01.752092   72476 provision.go:87] duration metric: took 305.284546ms to configureAuth
	I0829 19:26:01.752121   72476 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:26:01.752300   72476 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:26:01.752381   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.754975   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.755266   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.755285   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.755488   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.755741   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.755899   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.756033   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.756154   72476 main.go:141] libmachine: Using SSH client type: native
	I0829 19:26:01.756313   72476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:26:01.756338   72476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:26:01.982764   72476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:26:01.982801   72476 main.go:141] libmachine: Checking connection to Docker...
	I0829 19:26:01.982809   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetURL
	I0829 19:26:01.984158   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using libvirt version 6000000
	I0829 19:26:01.986932   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.987306   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.987348   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.987473   72476 main.go:141] libmachine: Docker is up and running!
	I0829 19:26:01.987487   72476 main.go:141] libmachine: Reticulating splines...
	I0829 19:26:01.987494   72476 client.go:171] duration metric: took 25.01868085s to LocalClient.Create
	I0829 19:26:01.987517   72476 start.go:167] duration metric: took 25.018743269s to libmachine.API.Create "old-k8s-version-467349"
	I0829 19:26:01.987531   72476 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:26:01.987549   72476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:26:01.987578   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:01.987872   72476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:26:01.987904   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:01.990068   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.990512   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:01.990558   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:01.990731   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:01.990963   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:01.991119   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:01.991260   72476 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:26:02.072513   72476 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:26:02.076601   72476 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:26:02.076631   72476 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:26:02.076691   72476 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:26:02.076772   72476 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:26:02.076874   72476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:26:02.085564   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:26:02.109139   72476 start.go:296] duration metric: took 121.588549ms for postStartSetup
	I0829 19:26:02.109194   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:26:02.109752   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:26:02.112406   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.112689   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:02.112717   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.112931   72476 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:26:02.113103   72476 start.go:128] duration metric: took 25.165930793s to createHost
	I0829 19:26:02.113124   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:02.115270   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.115644   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:02.115669   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.115812   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:02.115961   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:02.116120   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:02.116259   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:02.116411   72476 main.go:141] libmachine: Using SSH client type: native
	I0829 19:26:02.116613   72476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:26:02.116633   72476 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:26:02.218533   72476 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724959562.190829152
	
	I0829 19:26:02.218561   72476 fix.go:216] guest clock: 1724959562.190829152
	I0829 19:26:02.218569   72476 fix.go:229] Guest: 2024-08-29 19:26:02.190829152 +0000 UTC Remote: 2024-08-29 19:26:02.113112803 +0000 UTC m=+36.672631465 (delta=77.716349ms)
	I0829 19:26:02.218587   72476 fix.go:200] guest clock delta is within tolerance: 77.716349ms
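The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the skew when it stays under a tolerance. Below is a minimal sketch of that comparison; the one-second threshold is an assumption, since the actual tolerance value is not shown in the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
// (for example "1724959562.190829152") into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		frac += strings.Repeat("0", 9-len(frac))
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold; the real value is not in the log

	guest, err := parseGuestClock("1724959562.190829152") // guest output captured above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; a resync would be needed\n", delta, tolerance)
	}
}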
	I0829 19:26:02.218592   72476 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 25.271592507s
	I0829 19:26:02.218619   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:02.218958   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:26:02.221816   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.222227   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:02.222251   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.222442   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:02.222944   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:02.223096   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:26:02.223160   72476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:26:02.223197   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:02.223338   72476 ssh_runner.go:195] Run: cat /version.json
	I0829 19:26:02.223367   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:26:02.225944   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.226297   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:02.226324   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.226344   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.226516   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:02.226702   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:02.226772   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:02.226797   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:02.226851   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:02.226947   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:26:02.227018   72476 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:26:02.227071   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:26:02.227218   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:26:02.227367   72476 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:26:02.343053   72476 ssh_runner.go:195] Run: systemctl --version
	I0829 19:26:02.349381   72476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:26:02.513537   72476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:26:02.519520   72476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:26:02.519590   72476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:26:02.541097   72476 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
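The cni.go lines above disable the pre-installed bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI minikube configures later. A minimal sketch of that rename pass follows; it is not minikube's own code, the directory path is the one seen in the log, and it needs root on the real guest.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d" // path from the log; on a dev machine this may simply be empty
	matches, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		panic(err)
	}
	for _, path := range matches {
		name := filepath.Base(path)
		isBridgeOrPodman := strings.Contains(name, "bridge") || strings.Contains(name, "podman")
		if !isBridgeOrPodman || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		target := path + ".mk_disabled"
		if err := os.Rename(path, target); err != nil {
			fmt.Println("skip:", err) // would need root on the guest
			continue
		}
		fmt.Println("disabled", path, "->", target)
	}
}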
	I0829 19:26:02.541120   72476 start.go:495] detecting cgroup driver to use...
	I0829 19:26:02.541174   72476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:26:02.560965   72476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:26:02.575453   72476 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:26:02.575520   72476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:26:02.588123   72476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:26:02.600822   72476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:26:02.719514   72476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:26:02.875803   72476 docker.go:233] disabling docker service ...
	I0829 19:26:02.875871   72476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:26:02.893341   72476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:26:02.905698   72476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:26:03.027186   72476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:26:03.150181   72476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:26:03.166346   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:26:03.183767   72476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:26:03.183834   72476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:26:03.193450   72476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:26:03.193520   72476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:26:03.203490   72476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:26:03.213375   72476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:26:03.223490   72476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
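The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.2 as its pause image and cgroupfs as its cgroup manager, with conmon_cgroup pinned to "pod". The following is a minimal Go sketch of the same substitutions applied to a hypothetical config fragment, not the file that actually ships in the minikube ISO.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting content; the real file lives at
	// /etc/crio/crio.conf.d/02-crio.conf inside the guest.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conmonRe := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	// Mirrors the sed edits in the log: set the pause image, drop the old
	// conmon_cgroup line, then force cgroupfs and re-add conmon_cgroup = "pod"
	// (the delete runs before the re-add here so the new line survives).
	conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = conmonRe.ReplaceAllString(conf, "")
	conf = cgroupRe.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}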
	I0829 19:26:03.234150   72476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:26:03.245846   72476 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:26:03.245903   72476 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:26:03.259123   72476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:26:03.269037   72476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:26:03.393272   72476 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:26:03.494809   72476 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:26:03.494878   72476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:26:03.500345   72476 start.go:563] Will wait 60s for crictl version
	I0829 19:26:03.500413   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:03.504421   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:26:03.544090   72476 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:26:03.544183   72476 ssh_runner.go:195] Run: crio --version
	I0829 19:26:03.573516   72476 ssh_runner.go:195] Run: crio --version
	I0829 19:26:03.604835   72476 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:26:03.606072   72476 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:26:03.609251   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:03.609695   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:25:52 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:26:03.609726   72476 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:26:03.609945   72476 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:26:03.615094   72476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:26:03.628112   72476 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:26:03.628221   72476 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:26:03.628259   72476 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:26:03.662128   72476 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:26:03.662214   72476 ssh_runner.go:195] Run: which lz4
	I0829 19:26:03.667057   72476 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:26:03.671248   72476 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:26:03.671285   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:26:05.178656   72476 crio.go:462] duration metric: took 1.511642738s to copy over tarball
	I0829 19:26:05.178747   72476 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:26:07.889630   72476 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.710846323s)
	I0829 19:26:07.889669   72476 crio.go:469] duration metric: took 2.71097667s to extract the tarball
	I0829 19:26:07.889678   72476 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:26:07.932576   72476 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:26:07.978344   72476 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
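The crio.go:510 check above runs "sudo crictl images --output json" and looks for the expected control-plane images before concluding that the preload is not usable. A small sketch of that decision follows; the JSON shape (an images array with repoTags) is assumed from typical crictl output rather than taken from the log.

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the given image reference appears in crictl's JSON output.
func hasImage(raw []byte, ref string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == ref {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Canned output standing in for `sudo crictl images --output json`.
	raw := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded image present:", ok) // false -> fall back to the tarball / image cache
}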
	I0829 19:26:07.978379   72476 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:26:07.978469   72476 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:26:07.978472   72476 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:07.978481   72476 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:07.978536   72476 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:07.978574   72476 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:26:07.978571   72476 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:07.978550   72476 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:26:07.978798   72476 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:07.980032   72476 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:07.980195   72476 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:07.980206   72476 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:26:07.980196   72476 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:07.980195   72476 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:26:07.980252   72476 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:26:07.980287   72476 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:07.980197   72476 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:08.221580   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:08.227787   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:08.238898   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:26:08.277414   72476 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:26:08.277471   72476 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:08.277519   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.286958   72476 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:26:08.287001   72476 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:08.287037   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.303913   72476 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:26:08.303953   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:08.303961   72476 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:26:08.303971   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:08.304001   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.324421   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:26:08.324850   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:08.351166   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:08.360329   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:08.364918   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:26:08.364993   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:08.374521   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:08.470714   72476 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:26:08.470756   72476 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:26:08.470794   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.473133   72476 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:26:08.473167   72476 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:08.473200   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.487957   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:26:08.533076   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:26:08.533107   72476 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:26:08.533139   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:26:08.533143   72476 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:08.533183   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.534480   72476 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:26:08.534517   72476 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:08.534524   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:26:08.534556   72476 ssh_runner.go:195] Run: which crictl
	I0829 19:26:08.534566   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:08.573408   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:26:08.573482   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:08.644663   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:26:08.644799   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:08.644903   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:26:08.646551   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:08.646625   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:26:08.666266   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:08.754823   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:26:08.754943   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:26:08.757382   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:26:08.760490   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:08.778731   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:26:08.833230   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:26:08.864001   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:26:08.864029   72476 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:26:08.864001   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:26:08.899797   72476 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:26:09.262717   72476 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:26:09.404620   72476 cache_images.go:92] duration metric: took 1.426219946s to LoadCachedImages
	W0829 19:26:09.404688   72476 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0829 19:26:09.404702   72476 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:26:09.404817   72476 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
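The kubeadm.go:946 block above is the kubelet systemd drop-in that later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Below is a minimal sketch, not minikube's own templating code, of rendering that drop-in from the node values visible in the log.

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the unit text shown in the log, with the node-specific values templated.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-467349", "192.168.72.112"})
}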
	I0829 19:26:09.404906   72476 ssh_runner.go:195] Run: crio config
	I0829 19:26:09.453863   72476 cni.go:84] Creating CNI manager for ""
	I0829 19:26:09.453888   72476 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:26:09.453905   72476 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:26:09.453941   72476 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:26:09.454154   72476 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:26:09.454237   72476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:26:09.468924   72476 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:26:09.468998   72476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:26:09.478812   72476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:26:09.496873   72476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:26:09.514064   72476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:26:09.531230   72476 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:26:09.535024   72476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:26:09.547210   72476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:26:09.665622   72476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:26:09.686366   72476 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:26:09.686392   72476 certs.go:194] generating shared ca certs ...
	I0829 19:26:09.686412   72476 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:09.686578   72476 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:26:09.686633   72476 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:26:09.686647   72476 certs.go:256] generating profile certs ...
	I0829 19:26:09.686713   72476 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:26:09.686730   72476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.crt with IP's: []
	I0829 19:26:10.028996   72476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.crt ...
	I0829 19:26:10.029028   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.crt: {Name:mkdaabcd845df255ef35517f0571ba8c107e8bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:10.029194   72476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key ...
	I0829 19:26:10.029226   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key: {Name:mk74be38345bff4e20e50de89719e34cacf357b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:10.029319   72476 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:26:10.029340   72476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt.b97fdb0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.112]
	I0829 19:26:10.294963   72476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt.b97fdb0f ...
	I0829 19:26:10.295003   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt.b97fdb0f: {Name:mk05072c555b3bb43bb1e9d26677001105861886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:10.295214   72476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f ...
	I0829 19:26:10.295234   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f: {Name:mkc42e0ef44cb95b7555506fe7eb86de37c4cf9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:10.295338   72476 certs.go:381] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt.b97fdb0f -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt
	I0829 19:26:10.295454   72476 certs.go:385] copying /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f -> /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key
	I0829 19:26:10.295536   72476 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:26:10.295559   72476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt with IP's: []
	I0829 19:26:10.452661   72476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt ...
	I0829 19:26:10.452688   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt: {Name:mkc1d68071cdb4eb2d4cc1f27d80e8d445a1368a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:10.471951   72476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key ...
	I0829 19:26:10.471984   72476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key: {Name:mk888c7ca1e51b20902148eb23dd8f0d68c1f71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:26:10.472207   72476 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:26:10.472258   72476 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:26:10.472274   72476 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:26:10.472309   72476 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:26:10.472360   72476 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:26:10.472391   72476 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:26:10.472440   72476 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:26:10.473034   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:26:10.498359   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:26:10.531987   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:26:10.556436   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:26:10.578859   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:26:10.602203   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:26:10.638543   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:26:10.663581   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:26:10.690573   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:26:10.716630   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:26:10.742073   72476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:26:10.768454   72476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
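The apiserver serving certificate copied above was generated for the service IP, loopback, and node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.112). If a start like this later fails with TLS errors, one quick sanity check is to read the SANs back out of the cert with openssl (a sketch; the path is the profile path used above):

    # List the Subject Alternative Names baked into the generated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt \
      | grep -A1 'Subject Alternative Name'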
	I0829 19:26:10.785387   72476 ssh_runner.go:195] Run: openssl version
	I0829 19:26:10.791668   72476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:26:10.802320   72476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:26:10.806692   72476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:26:10.806760   72476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:26:10.812929   72476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:26:10.823705   72476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:26:10.834683   72476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:26:10.839197   72476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:26:10.839261   72476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:26:10.844680   72476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:26:10.854748   72476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:26:10.864734   72476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:26:10.869188   72476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:26:10.869254   72476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:26:10.874753   72476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
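The three ln -fs steps above lay the certificates out in OpenSSL's hashed-directory format: each trusted cert is also reachable as /etc/ssl/certs/<subject-hash>.0, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA.pem here). The same linking step for one cert, as a sketch:

    # Link a CA cert into /etc/ssl/certs under its subject-hash name.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # .0 = first cert with this hash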
	I0829 19:26:10.884786   72476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:26:10.888994   72476 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
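The stat failure above is expected on a fresh node: the absence of apiserver-kubelet-client.crt is what tells minikube this is a first start rather than a restart of an existing control plane. The check reduces to an exit-status test (a sketch):

    # Missing cert => first start; kubeadm init will generate it.
    if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "first start: no apiserver-kubelet-client cert yet"
    fi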
	I0829 19:26:10.889067   72476 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:26:10.889158   72476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:26:10.889219   72476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:26:10.933764   72476 cri.go:89] found id: ""
	I0829 19:26:10.933845   72476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:26:10.943605   72476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:26:10.953629   72476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:26:10.966007   72476 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:26:10.966025   72476 kubeadm.go:157] found existing configuration files:
	
	I0829 19:26:10.966099   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:26:10.974912   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:26:10.974983   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:26:10.984914   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:26:10.994285   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:26:10.994348   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:26:11.004337   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:26:11.013210   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:26:11.013278   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:26:11.022429   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:26:11.031037   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:26:11.031100   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
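The four grep/rm pairs above are the stale-kubeconfig check: any of the /etc/kubernetes/*.conf files that does not point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs (here they simply do not exist yet, so every grep exits 2 and each rm is a no-op). Condensed into a loop, the same logic looks like this (a sketch):

    # Drop kubeconfig files that do not reference the expected control-plane endpoint.
    ENDPOINT=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done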
	I0829 19:26:11.041007   72476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:26:11.311991   72476 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:28:10.041103   72476 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:28:10.041205   72476 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:28:10.042675   72476 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:28:10.042738   72476 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:28:10.042842   72476 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:28:10.042978   72476 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:28:10.043100   72476 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:28:10.043190   72476 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:28:10.044898   72476 out.go:235]   - Generating certificates and keys ...
	I0829 19:28:10.044990   72476 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:28:10.045070   72476 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:28:10.045182   72476 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 19:28:10.045277   72476 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 19:28:10.045366   72476 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 19:28:10.045442   72476 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 19:28:10.045507   72476 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 19:28:10.045679   72476 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-467349] and IPs [192.168.72.112 127.0.0.1 ::1]
	I0829 19:28:10.045757   72476 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 19:28:10.045885   72476 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-467349] and IPs [192.168.72.112 127.0.0.1 ::1]
	I0829 19:28:10.045962   72476 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 19:28:10.046056   72476 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 19:28:10.046150   72476 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 19:28:10.046231   72476 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:28:10.046295   72476 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:28:10.046375   72476 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:28:10.046455   72476 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:28:10.046520   72476 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:28:10.046622   72476 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:28:10.046744   72476 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:28:10.046800   72476 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:28:10.046893   72476 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:28:10.048273   72476 out.go:235]   - Booting up control plane ...
	I0829 19:28:10.048365   72476 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:28:10.048449   72476 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:28:10.048539   72476 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:28:10.048645   72476 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:28:10.048814   72476 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:28:10.048876   72476 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:28:10.048954   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:28:10.049123   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:28:10.049259   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:28:10.049448   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:28:10.049555   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:28:10.049710   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:28:10.049773   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:28:10.049930   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:28:10.050024   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:28:10.050313   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:28:10.050332   72476 kubeadm.go:310] 
	I0829 19:28:10.050382   72476 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:28:10.050417   72476 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:28:10.050425   72476 kubeadm.go:310] 
	I0829 19:28:10.050451   72476 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:28:10.050479   72476 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:28:10.050596   72476 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:28:10.050616   72476 kubeadm.go:310] 
	I0829 19:28:10.050758   72476 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:28:10.050806   72476 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:28:10.050855   72476 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:28:10.050864   72476 kubeadm.go:310] 
	I0829 19:28:10.050990   72476 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:28:10.051078   72476 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:28:10.051089   72476 kubeadm.go:310] 
	I0829 19:28:10.051209   72476 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:28:10.051349   72476 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:28:10.051443   72476 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:28:10.051526   72476 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:28:10.051612   72476 kubeadm.go:310] 
	W0829 19:28:10.051674   72476 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-467349] and IPs [192.168.72.112 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-467349] and IPs [192.168.72.112 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-467349] and IPs [192.168.72.112 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-467349] and IPs [192.168.72.112 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:28:10.051714   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:28:11.228861   72476 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.17711676s)
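After the first init attempt times out, minikube tears the partial control plane down with kubeadm reset, pointed at the CRI-O socket, and then retries the same init. Run by hand, that cleanup is roughly the following (a sketch using the binary path from the log; stopping the kubelet first is an extra precaution, not something the log shows):

    # Wipe the failed control-plane attempt before re-running kubeadm init.
    sudo systemctl stop kubelet    # precaution; the log only checks is-active
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force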
	I0829 19:28:11.228948   72476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:28:11.242402   72476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:28:11.253243   72476 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:28:11.253268   72476 kubeadm.go:157] found existing configuration files:
	
	I0829 19:28:11.253314   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:28:11.262152   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:28:11.262220   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:28:11.271200   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:28:11.279708   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:28:11.279755   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:28:11.288367   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:28:11.296691   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:28:11.296754   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:28:11.305678   72476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:28:11.314291   72476 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:28:11.314343   72476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:28:11.324344   72476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:28:11.535259   72476 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:30:07.919044   72476 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:30:07.919157   72476 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:30:07.920732   72476 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:30:07.920803   72476 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:30:07.920890   72476 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:30:07.920991   72476 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:30:07.921084   72476 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:30:07.921148   72476 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:30:07.922964   72476 out.go:235]   - Generating certificates and keys ...
	I0829 19:30:07.923029   72476 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:30:07.923083   72476 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:30:07.923148   72476 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:30:07.923201   72476 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:30:07.923268   72476 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:30:07.923346   72476 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:30:07.923413   72476 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:30:07.923491   72476 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:30:07.923581   72476 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:30:07.923665   72476 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:30:07.923720   72476 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:30:07.923777   72476 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:30:07.923842   72476 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:30:07.923913   72476 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:30:07.924000   72476 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:30:07.924087   72476 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:30:07.924193   72476 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:30:07.924265   72476 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:30:07.924319   72476 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:30:07.924375   72476 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:30:07.925721   72476 out.go:235]   - Booting up control plane ...
	I0829 19:30:07.925796   72476 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:30:07.925868   72476 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:30:07.925944   72476 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:30:07.926011   72476 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:30:07.926175   72476 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:30:07.926237   72476 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:30:07.926317   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:30:07.926535   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:30:07.926639   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:30:07.926934   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:30:07.927031   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:30:07.927281   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:30:07.927403   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:30:07.927665   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:30:07.927773   72476 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:30:07.928017   72476 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:30:07.928026   72476 kubeadm.go:310] 
	I0829 19:30:07.928065   72476 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:30:07.928100   72476 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:30:07.928106   72476 kubeadm.go:310] 
	I0829 19:30:07.928138   72476 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:30:07.928167   72476 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:30:07.928253   72476 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:30:07.928260   72476 kubeadm.go:310] 
	I0829 19:30:07.928370   72476 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:30:07.928405   72476 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:30:07.928451   72476 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:30:07.928464   72476 kubeadm.go:310] 
	I0829 19:30:07.928583   72476 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:30:07.928678   72476 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:30:07.928687   72476 kubeadm.go:310] 
	I0829 19:30:07.928839   72476 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:30:07.928961   72476 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:30:07.929073   72476 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:30:07.929135   72476 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:30:07.929190   72476 kubeadm.go:394] duration metric: took 3m57.040128964s to StartCluster
	I0829 19:30:07.929203   72476 kubeadm.go:310] 
	I0829 19:30:07.929236   72476 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:30:07.929288   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:30:07.978869   72476 cri.go:89] found id: ""
	I0829 19:30:07.978901   72476 logs.go:276] 0 containers: []
	W0829 19:30:07.978914   72476 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:30:07.978922   72476 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:30:07.978986   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:30:08.011942   72476 cri.go:89] found id: ""
	I0829 19:30:08.011971   72476 logs.go:276] 0 containers: []
	W0829 19:30:08.011980   72476 logs.go:278] No container was found matching "etcd"
	I0829 19:30:08.011986   72476 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:30:08.012035   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:30:08.048777   72476 cri.go:89] found id: ""
	I0829 19:30:08.048809   72476 logs.go:276] 0 containers: []
	W0829 19:30:08.048819   72476 logs.go:278] No container was found matching "coredns"
	I0829 19:30:08.048826   72476 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:30:08.048889   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:30:08.079347   72476 cri.go:89] found id: ""
	I0829 19:30:08.079370   72476 logs.go:276] 0 containers: []
	W0829 19:30:08.079377   72476 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:30:08.079382   72476 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:30:08.079428   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:30:08.112139   72476 cri.go:89] found id: ""
	I0829 19:30:08.112170   72476 logs.go:276] 0 containers: []
	W0829 19:30:08.112180   72476 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:30:08.112189   72476 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:30:08.112259   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:30:08.144920   72476 cri.go:89] found id: ""
	I0829 19:30:08.144951   72476 logs.go:276] 0 containers: []
	W0829 19:30:08.144963   72476 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:30:08.144971   72476 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:30:08.145040   72476 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:30:08.176218   72476 cri.go:89] found id: ""
	I0829 19:30:08.176247   72476 logs.go:276] 0 containers: []
	W0829 19:30:08.176259   72476 logs.go:278] No container was found matching "kindnet"
	I0829 19:30:08.176271   72476 logs.go:123] Gathering logs for kubelet ...
	I0829 19:30:08.176290   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:30:08.225527   72476 logs.go:123] Gathering logs for dmesg ...
	I0829 19:30:08.225566   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:30:08.238153   72476 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:30:08.238187   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:30:08.346372   72476 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:30:08.346394   72476 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:30:08.346409   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:30:08.449499   72476 logs.go:123] Gathering logs for container status ...
	I0829 19:30:08.449535   72476 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
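With no control-plane containers found and the API server unreachable on localhost:8443, the logs gathered above (kubelet journal, dmesg, CRI-O journal, crictl container status) are where the root cause will show up. The equivalent manual checks on the node, mirroring the commands in the log:

    # Kubelet state and recent log lines.
    systemctl status kubelet
    sudo journalctl -u kubelet -n 400 --no-pager
    # Control-plane containers CRI-O has created, if any, and their logs.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID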
	W0829 19:30:08.485867   72476 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:30:08.485924   72476 out.go:270] * 
	* 
	W0829 19:30:08.485992   72476 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:30:08.486010   72476 out.go:270] * 
	* 
	W0829 19:30:08.486843   72476 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:30:08.489379   72476 out.go:201] 
	W0829 19:30:08.490524   72476 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:30:08.490584   72476 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:30:08.490611   72476 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:30:08.491987   72476 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
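The repeated [kubelet-check] messages in the stderr above mean kubeadm never saw the kubelet answer on its local healthz port, so the control-plane static pods were never confirmed. A first triage pass, run inside the guest (for example via `minikube ssh -p old-k8s-version-467349`), could look like the sketch below; it only reuses the commands and endpoints the log itself suggests, so treat it as a starting point rather than a fix:

	# Is the kubelet unit enabled and running, and what is it logging?
	sudo systemctl enable --now kubelet
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# The same probe kubeadm's [kubelet-check] performs:
	curl -sSL http://localhost:10248/healthz
	# Did any control-plane container start (or crash) under cri-o?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause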
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 6 (220.602088ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:30:08.765042   79025 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467349" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.34s)
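Given the suggestion logged above, a plausible retry is the same invocation with the kubelet cgroup driver pinned to systemd. This is a sketch assembled from the flags shown in the failing run plus the logged hint; whether it actually clears K8S_KUBELET_NOT_RUNNING depends on the guest's cgroup configuration:

	out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd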

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-690795 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-690795 --alsologtostderr -v=3: exit status 82 (2m0.515282212s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-690795"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:27:24.112913   77896 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:27:24.113014   77896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:24.113021   77896 out.go:358] Setting ErrFile to fd 2...
	I0829 19:27:24.113032   77896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:24.113197   77896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:27:24.113393   77896 out.go:352] Setting JSON to false
	I0829 19:27:24.113460   77896 mustload.go:65] Loading cluster: no-preload-690795
	I0829 19:27:24.113764   77896 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:24.113825   77896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:27:24.113985   77896 mustload.go:65] Loading cluster: no-preload-690795
	I0829 19:27:24.114081   77896 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:24.114121   77896 stop.go:39] StopHost: no-preload-690795
	I0829 19:27:24.115167   77896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:27:24.115373   77896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:27:24.130723   77896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0829 19:27:24.131232   77896 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:27:24.131738   77896 main.go:141] libmachine: Using API Version  1
	I0829 19:27:24.131758   77896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:27:24.132089   77896 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:27:24.134345   77896 out.go:177] * Stopping node "no-preload-690795"  ...
	I0829 19:27:24.135690   77896 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:27:24.135722   77896 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:27:24.135947   77896 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:27:24.135971   77896 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:27:24.139048   77896 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:27:24.139483   77896 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:26:17 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:27:24.139513   77896 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:27:24.139736   77896 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:27:24.139911   77896 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:27:24.140061   77896 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:27:24.140191   77896 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:27:24.242517   77896 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:27:24.306424   77896 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:27:24.380844   77896 main.go:141] libmachine: Stopping "no-preload-690795"...
	I0829 19:27:24.380875   77896 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:27:24.382471   77896 main.go:141] libmachine: (no-preload-690795) Calling .Stop
	I0829 19:27:24.386496   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 0/120
	I0829 19:27:25.388793   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 1/120
	I0829 19:27:26.390405   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 2/120
	I0829 19:27:27.392579   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 3/120
	I0829 19:27:28.393845   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 4/120
	I0829 19:27:29.395817   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 5/120
	I0829 19:27:30.397052   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 6/120
	I0829 19:27:31.398464   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 7/120
	I0829 19:27:32.400607   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 8/120
	I0829 19:27:33.402039   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 9/120
	I0829 19:27:34.404042   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 10/120
	I0829 19:27:35.405442   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 11/120
	I0829 19:27:36.406801   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 12/120
	I0829 19:27:37.409008   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 13/120
	I0829 19:27:38.410757   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 14/120
	I0829 19:27:39.412138   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 15/120
	I0829 19:27:40.413993   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 16/120
	I0829 19:27:41.415740   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 17/120
	I0829 19:27:42.416987   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 18/120
	I0829 19:27:43.419456   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 19/120
	I0829 19:27:44.421769   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 20/120
	I0829 19:27:45.423472   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 21/120
	I0829 19:27:46.425010   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 22/120
	I0829 19:27:47.426698   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 23/120
	I0829 19:27:48.428720   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 24/120
	I0829 19:27:49.430840   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 25/120
	I0829 19:27:50.432788   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 26/120
	I0829 19:27:51.434326   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 27/120
	I0829 19:27:52.435844   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 28/120
	I0829 19:27:53.437419   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 29/120
	I0829 19:27:54.439337   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 30/120
	I0829 19:27:55.440885   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 31/120
	I0829 19:27:56.442393   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 32/120
	I0829 19:27:57.443769   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 33/120
	I0829 19:27:58.445147   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 34/120
	I0829 19:27:59.446998   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 35/120
	I0829 19:28:00.448350   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 36/120
	I0829 19:28:01.449705   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 37/120
	I0829 19:28:02.451112   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 38/120
	I0829 19:28:03.452611   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 39/120
	I0829 19:28:04.454644   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 40/120
	I0829 19:28:05.456097   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 41/120
	I0829 19:28:06.457472   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 42/120
	I0829 19:28:07.458844   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 43/120
	I0829 19:28:08.460142   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 44/120
	I0829 19:28:09.462057   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 45/120
	I0829 19:28:10.463551   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 46/120
	I0829 19:28:11.465980   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 47/120
	I0829 19:28:12.467705   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 48/120
	I0829 19:28:13.469345   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 49/120
	I0829 19:28:14.471568   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 50/120
	I0829 19:28:15.473307   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 51/120
	I0829 19:28:16.474640   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 52/120
	I0829 19:28:17.476107   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 53/120
	I0829 19:28:18.477321   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 54/120
	I0829 19:28:19.479501   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 55/120
	I0829 19:28:20.480772   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 56/120
	I0829 19:28:21.482216   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 57/120
	I0829 19:28:22.483592   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 58/120
	I0829 19:28:23.485089   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 59/120
	I0829 19:28:24.487311   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 60/120
	I0829 19:28:25.488658   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 61/120
	I0829 19:28:26.490018   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 62/120
	I0829 19:28:27.491595   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 63/120
	I0829 19:28:28.493147   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 64/120
	I0829 19:28:29.495172   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 65/120
	I0829 19:28:30.496538   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 66/120
	I0829 19:28:31.498049   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 67/120
	I0829 19:28:32.499263   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 68/120
	I0829 19:28:33.500708   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 69/120
	I0829 19:28:34.503020   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 70/120
	I0829 19:28:35.504245   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 71/120
	I0829 19:28:36.505932   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 72/120
	I0829 19:28:37.507152   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 73/120
	I0829 19:28:38.508490   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 74/120
	I0829 19:28:39.510525   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 75/120
	I0829 19:28:40.512583   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 76/120
	I0829 19:28:41.514029   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 77/120
	I0829 19:28:42.515379   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 78/120
	I0829 19:28:43.516878   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 79/120
	I0829 19:28:44.519019   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 80/120
	I0829 19:28:45.520409   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 81/120
	I0829 19:28:46.521659   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 82/120
	I0829 19:28:47.522973   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 83/120
	I0829 19:28:48.524257   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 84/120
	I0829 19:28:49.526165   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 85/120
	I0829 19:28:50.527727   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 86/120
	I0829 19:28:51.529144   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 87/120
	I0829 19:28:52.530555   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 88/120
	I0829 19:28:53.531874   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 89/120
	I0829 19:28:54.534358   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 90/120
	I0829 19:28:55.535796   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 91/120
	I0829 19:28:56.537105   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 92/120
	I0829 19:28:57.538684   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 93/120
	I0829 19:28:58.539987   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 94/120
	I0829 19:28:59.542131   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 95/120
	I0829 19:29:00.543874   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 96/120
	I0829 19:29:01.545432   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 97/120
	I0829 19:29:02.546917   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 98/120
	I0829 19:29:03.548194   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 99/120
	I0829 19:29:04.550498   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 100/120
	I0829 19:29:05.551799   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 101/120
	I0829 19:29:06.553116   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 102/120
	I0829 19:29:07.554690   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 103/120
	I0829 19:29:08.556066   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 104/120
	I0829 19:29:09.558310   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 105/120
	I0829 19:29:10.560467   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 106/120
	I0829 19:29:11.561781   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 107/120
	I0829 19:29:12.563180   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 108/120
	I0829 19:29:13.564812   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 109/120
	I0829 19:29:14.567209   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 110/120
	I0829 19:29:15.568994   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 111/120
	I0829 19:29:16.570397   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 112/120
	I0829 19:29:17.571732   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 113/120
	I0829 19:29:18.573180   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 114/120
	I0829 19:29:19.575247   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 115/120
	I0829 19:29:20.576968   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 116/120
	I0829 19:29:21.578446   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 117/120
	I0829 19:29:22.579947   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 118/120
	I0829 19:29:23.581179   77896 main.go:141] libmachine: (no-preload-690795) Waiting for machine to stop 119/120
	I0829 19:29:24.581914   77896 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 19:29:24.581966   77896 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 19:29:24.583751   77896 out.go:201] 
	W0829 19:29:24.584943   77896 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 19:29:24.584960   77896 out.go:270] * 
	* 
	W0829 19:29:24.587653   77896 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:29:24.588783   77896 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-690795 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795
E0829 19:29:25.888539   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:31.018046   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:32.802407   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:32.808801   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:32.820136   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:32.841503   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:32.882888   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:32.964347   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:33.125912   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:33.447669   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:34.089339   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:35.370720   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795: exit status 3 (18.544389763s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:29:43.134528   78612 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	E0829 19:29:43.134554   78612 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-690795" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.06s)
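Exit status 82 here corresponds to GUEST_STOP_TIMEOUT: the kvm2 driver polled the domain for 120 one-second intervals ("Waiting for machine to stop 0/120" through 119/120) while the VM stayed in the Running state, after which even SSH to 192.168.39.76:22 stopped answering ("no route to host"). When reproducing on a libvirt host, the domain state can be inspected and forced down directly; this is a hypothetical triage sketch that assumes virsh is available on the host, which the test log does not show:

	# What does libvirt report for the stuck domain and its network lease?
	sudo virsh list --all | grep no-preload-690795
	sudo virsh net-dhcp-leases mk-no-preload-690795
	# Retry a graceful ACPI shutdown, then hard-stop if it remains Running
	sudo virsh shutdown no-preload-690795
	sudo virsh destroy no-preload-690795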

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-920571 --alsologtostderr -v=3
E0829 19:28:03.950361   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:03.956773   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:03.968108   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:03.989508   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:04.030891   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:04.112384   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:04.273882   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:04.595952   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:05.237369   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:06.518839   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:09.081156   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:14.203331   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:24.445427   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:26.706081   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-920571 --alsologtostderr -v=3: exit status 82 (2m0.477714109s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-920571"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:27:37.365530   78063 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:27:37.365661   78063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:37.365670   78063 out.go:358] Setting ErrFile to fd 2...
	I0829 19:27:37.365675   78063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:27:37.365828   78063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:27:37.366043   78063 out.go:352] Setting JSON to false
	I0829 19:27:37.366139   78063 mustload.go:65] Loading cluster: embed-certs-920571
	I0829 19:27:37.366893   78063 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:37.367079   78063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:27:37.367341   78063 mustload.go:65] Loading cluster: embed-certs-920571
	I0829 19:27:37.367617   78063 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:27:37.367707   78063 stop.go:39] StopHost: embed-certs-920571
	I0829 19:27:37.368493   78063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:27:37.368547   78063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:27:37.383521   78063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I0829 19:27:37.384160   78063 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:27:37.384738   78063 main.go:141] libmachine: Using API Version  1
	I0829 19:27:37.384763   78063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:27:37.385127   78063 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:27:37.387613   78063 out.go:177] * Stopping node "embed-certs-920571"  ...
	I0829 19:27:37.388876   78063 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:27:37.388905   78063 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:27:37.389104   78063 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:27:37.389130   78063 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:27:37.392655   78063 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:27:37.393306   78063 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:26:42 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:27:37.393312   78063 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:27:37.393335   78063 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:27:37.393515   78063 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:27:37.393651   78063 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:27:37.393796   78063 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:27:37.485756   78063 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:27:37.543510   78063 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:27:37.590644   78063 main.go:141] libmachine: Stopping "embed-certs-920571"...
	I0829 19:27:37.590680   78063 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:27:37.592548   78063 main.go:141] libmachine: (embed-certs-920571) Calling .Stop
	I0829 19:27:37.596798   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 0/120
	I0829 19:27:38.598968   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 1/120
	I0829 19:27:39.600754   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 2/120
	I0829 19:27:40.602760   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 3/120
	I0829 19:27:41.604787   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 4/120
	I0829 19:27:42.606863   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 5/120
	I0829 19:27:43.608383   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 6/120
	I0829 19:27:44.609859   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 7/120
	I0829 19:27:45.611383   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 8/120
	I0829 19:27:46.612911   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 9/120
	I0829 19:27:47.615173   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 10/120
	I0829 19:27:48.616637   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 11/120
	I0829 19:27:49.618551   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 12/120
	I0829 19:27:50.620167   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 13/120
	I0829 19:27:51.621493   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 14/120
	I0829 19:27:52.623471   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 15/120
	I0829 19:27:53.624879   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 16/120
	I0829 19:27:54.626369   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 17/120
	I0829 19:27:55.627773   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 18/120
	I0829 19:27:56.629147   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 19/120
	I0829 19:27:57.631405   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 20/120
	I0829 19:27:58.633010   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 21/120
	I0829 19:27:59.634282   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 22/120
	I0829 19:28:00.635597   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 23/120
	I0829 19:28:01.636789   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 24/120
	I0829 19:28:02.638881   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 25/120
	I0829 19:28:03.640366   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 26/120
	I0829 19:28:04.641780   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 27/120
	I0829 19:28:05.643023   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 28/120
	I0829 19:28:06.644427   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 29/120
	I0829 19:28:07.646365   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 30/120
	I0829 19:28:08.648417   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 31/120
	I0829 19:28:09.649726   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 32/120
	I0829 19:28:10.651488   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 33/120
	I0829 19:28:11.652873   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 34/120
	I0829 19:28:12.654861   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 35/120
	I0829 19:28:13.656659   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 36/120
	I0829 19:28:14.658501   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 37/120
	I0829 19:28:15.660347   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 38/120
	I0829 19:28:16.661711   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 39/120
	I0829 19:28:17.663854   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 40/120
	I0829 19:28:18.665229   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 41/120
	I0829 19:28:19.666714   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 42/120
	I0829 19:28:20.668547   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 43/120
	I0829 19:28:21.669902   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 44/120
	I0829 19:28:22.672075   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 45/120
	I0829 19:28:23.673259   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 46/120
	I0829 19:28:24.674687   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 47/120
	I0829 19:28:25.676336   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 48/120
	I0829 19:28:26.677744   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 49/120
	I0829 19:28:27.679855   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 50/120
	I0829 19:28:28.681299   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 51/120
	I0829 19:28:29.683297   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 52/120
	I0829 19:28:30.684736   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 53/120
	I0829 19:28:31.686169   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 54/120
	I0829 19:28:32.688140   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 55/120
	I0829 19:28:33.689808   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 56/120
	I0829 19:28:34.691123   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 57/120
	I0829 19:28:35.692395   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 58/120
	I0829 19:28:36.693864   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 59/120
	I0829 19:28:37.695645   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 60/120
	I0829 19:28:38.696917   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 61/120
	I0829 19:28:39.698376   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 62/120
	I0829 19:28:40.699706   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 63/120
	I0829 19:28:41.701096   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 64/120
	I0829 19:28:42.702964   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 65/120
	I0829 19:28:43.704483   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 66/120
	I0829 19:28:44.705714   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 67/120
	I0829 19:28:45.707164   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 68/120
	I0829 19:28:46.709346   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 69/120
	I0829 19:28:47.711434   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 70/120
	I0829 19:28:48.712725   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 71/120
	I0829 19:28:49.714344   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 72/120
	I0829 19:28:50.716787   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 73/120
	I0829 19:28:51.718263   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 74/120
	I0829 19:28:52.720258   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 75/120
	I0829 19:28:53.721821   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 76/120
	I0829 19:28:54.723167   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 77/120
	I0829 19:28:55.724567   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 78/120
	I0829 19:28:56.726020   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 79/120
	I0829 19:28:57.728296   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 80/120
	I0829 19:28:58.729827   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 81/120
	I0829 19:28:59.731155   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 82/120
	I0829 19:29:00.732605   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 83/120
	I0829 19:29:01.733893   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 84/120
	I0829 19:29:02.735986   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 85/120
	I0829 19:29:03.737418   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 86/120
	I0829 19:29:04.738882   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 87/120
	I0829 19:29:05.740211   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 88/120
	I0829 19:29:06.741499   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 89/120
	I0829 19:29:07.743046   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 90/120
	I0829 19:29:08.744331   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 91/120
	I0829 19:29:09.745880   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 92/120
	I0829 19:29:10.747267   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 93/120
	I0829 19:29:11.748639   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 94/120
	I0829 19:29:12.751220   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 95/120
	I0829 19:29:13.752543   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 96/120
	I0829 19:29:14.753821   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 97/120
	I0829 19:29:15.755267   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 98/120
	I0829 19:29:16.756806   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 99/120
	I0829 19:29:17.758910   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 100/120
	I0829 19:29:18.760407   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 101/120
	I0829 19:29:19.761879   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 102/120
	I0829 19:29:20.763231   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 103/120
	I0829 19:29:21.764649   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 104/120
	I0829 19:29:22.766977   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 105/120
	I0829 19:29:23.768341   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 106/120
	I0829 19:29:24.769700   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 107/120
	I0829 19:29:25.771131   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 108/120
	I0829 19:29:26.772579   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 109/120
	I0829 19:29:27.773983   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 110/120
	I0829 19:29:28.775343   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 111/120
	I0829 19:29:29.776785   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 112/120
	I0829 19:29:30.778207   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 113/120
	I0829 19:29:31.779771   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 114/120
	I0829 19:29:32.782016   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 115/120
	I0829 19:29:33.783430   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 116/120
	I0829 19:29:34.784925   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 117/120
	I0829 19:29:35.786381   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 118/120
	I0829 19:29:36.787972   78063 main.go:141] libmachine: (embed-certs-920571) Waiting for machine to stop 119/120
	I0829 19:29:37.789566   78063 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 19:29:37.789620   78063 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 19:29:37.791620   78063 out.go:201] 
	W0829 19:29:37.792801   78063 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 19:29:37.792818   78063 out.go:270] * 
	* 
	W0829 19:29:37.795647   78063 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:29:37.797873   78063 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-920571 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571
E0829 19:29:37.932675   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:43.054665   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571: exit status 3 (18.647239506s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:29:56.446406   78690 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	E0829 19:29:56.446426   78690 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-920571" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)
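The repeated "Waiting for machine to stop N/120" lines above correspond to a bounded stop-and-poll pattern: the stop command asks the kvm2 driver to shut the guest down, then re-checks the machine state roughly once per second for up to 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). The following is only a minimal, self-contained Go sketch of that pattern; the names (vm, fakeVM, stopWithTimeout) are illustrative placeholders and not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm abstracts the machine driver; only the two calls the stop path needs.
type vm interface {
	Stop() error            // request a graceful shutdown
	State() (string, error) // current state, e.g. "Running" or "Stopped"
}

// fakeVM never actually shuts down, reproducing the failure mode in the log.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

// stopWithTimeout requests a stop and then polls once per second for up to
// maxRetries attempts, printing progress the same way the log does. If the
// guest is still running afterwards, it returns an error analogous to the
// GUEST_STOP_TIMEOUT / exit status 82 failure above.
func stopWithTimeout(d vm, maxRetries int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxRetries; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		st, err := d.State()
		if err != nil {
			return err
		}
		if st != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Three attempts instead of 120 so the demo finishes quickly.
	if err := stopWithTimeout(fakeVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}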

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-672127 --alsologtostderr -v=3
E0829 19:28:50.041328   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.047672   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.059035   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.080500   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.121943   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.203419   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.365014   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:50.687123   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:51.329125   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:52.610902   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:28:55.172902   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:00.294310   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:10.536609   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-672127 --alsologtostderr -v=3: exit status 82 (2m0.497086688s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-672127"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:28:47.192327   78469 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:28:47.192577   78469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:28:47.192587   78469 out.go:358] Setting ErrFile to fd 2...
	I0829 19:28:47.192591   78469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:28:47.192753   78469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:28:47.193002   78469 out.go:352] Setting JSON to false
	I0829 19:28:47.193074   78469 mustload.go:65] Loading cluster: default-k8s-diff-port-672127
	I0829 19:28:47.193367   78469 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:28:47.193439   78469 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:28:47.193605   78469 mustload.go:65] Loading cluster: default-k8s-diff-port-672127
	I0829 19:28:47.193705   78469 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:28:47.193728   78469 stop.go:39] StopHost: default-k8s-diff-port-672127
	I0829 19:28:47.194050   78469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:28:47.194115   78469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:28:47.208570   78469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46653
	I0829 19:28:47.209060   78469 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:28:47.209688   78469 main.go:141] libmachine: Using API Version  1
	I0829 19:28:47.209706   78469 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:28:47.210064   78469 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:28:47.212406   78469 out.go:177] * Stopping node "default-k8s-diff-port-672127"  ...
	I0829 19:28:47.213660   78469 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0829 19:28:47.213683   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:28:47.213913   78469 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0829 19:28:47.213944   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:28:47.216925   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:28:47.217404   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:27:24 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:28:47.217443   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:28:47.217598   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:28:47.217803   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:28:47.217962   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:28:47.218138   78469 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:28:47.319295   78469 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0829 19:28:47.377356   78469 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0829 19:28:47.444863   78469 main.go:141] libmachine: Stopping "default-k8s-diff-port-672127"...
	I0829 19:28:47.444894   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:28:47.446497   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Stop
	I0829 19:28:47.450159   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 0/120
	I0829 19:28:48.451473   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 1/120
	I0829 19:28:49.452994   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 2/120
	I0829 19:28:50.454604   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 3/120
	I0829 19:28:51.456019   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 4/120
	I0829 19:28:52.458484   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 5/120
	I0829 19:28:53.459851   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 6/120
	I0829 19:28:54.461390   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 7/120
	I0829 19:28:55.463326   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 8/120
	I0829 19:28:56.464664   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 9/120
	I0829 19:28:57.466356   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 10/120
	I0829 19:28:58.467855   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 11/120
	I0829 19:28:59.469531   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 12/120
	I0829 19:29:00.470994   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 13/120
	I0829 19:29:01.472493   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 14/120
	I0829 19:29:02.474842   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 15/120
	I0829 19:29:03.476129   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 16/120
	I0829 19:29:04.477666   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 17/120
	I0829 19:29:05.478861   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 18/120
	I0829 19:29:06.480250   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 19/120
	I0829 19:29:07.482756   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 20/120
	I0829 19:29:08.484280   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 21/120
	I0829 19:29:09.485851   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 22/120
	I0829 19:29:10.487202   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 23/120
	I0829 19:29:11.488637   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 24/120
	I0829 19:29:12.490787   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 25/120
	I0829 19:29:13.492322   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 26/120
	I0829 19:29:14.493927   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 27/120
	I0829 19:29:15.495353   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 28/120
	I0829 19:29:16.496871   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 29/120
	I0829 19:29:17.499268   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 30/120
	I0829 19:29:18.500938   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 31/120
	I0829 19:29:19.502500   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 32/120
	I0829 19:29:20.504096   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 33/120
	I0829 19:29:21.505520   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 34/120
	I0829 19:29:22.507413   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 35/120
	I0829 19:29:23.508759   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 36/120
	I0829 19:29:24.510408   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 37/120
	I0829 19:29:25.511936   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 38/120
	I0829 19:29:26.513354   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 39/120
	I0829 19:29:27.514865   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 40/120
	I0829 19:29:28.516463   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 41/120
	I0829 19:29:29.518164   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 42/120
	I0829 19:29:30.519570   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 43/120
	I0829 19:29:31.521108   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 44/120
	I0829 19:29:32.523161   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 45/120
	I0829 19:29:33.524751   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 46/120
	I0829 19:29:34.526725   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 47/120
	I0829 19:29:35.528180   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 48/120
	I0829 19:29:36.529832   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 49/120
	I0829 19:29:37.532065   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 50/120
	I0829 19:29:38.533583   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 51/120
	I0829 19:29:39.535049   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 52/120
	I0829 19:29:40.536870   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 53/120
	I0829 19:29:41.538508   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 54/120
	I0829 19:29:42.540614   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 55/120
	I0829 19:29:43.542124   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 56/120
	I0829 19:29:44.543869   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 57/120
	I0829 19:29:45.545624   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 58/120
	I0829 19:29:46.547377   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 59/120
	I0829 19:29:47.549829   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 60/120
	I0829 19:29:48.551296   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 61/120
	I0829 19:29:49.552787   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 62/120
	I0829 19:29:50.554178   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 63/120
	I0829 19:29:51.555556   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 64/120
	I0829 19:29:52.557613   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 65/120
	I0829 19:29:53.559079   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 66/120
	I0829 19:29:54.560514   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 67/120
	I0829 19:29:55.561784   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 68/120
	I0829 19:29:56.563043   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 69/120
	I0829 19:29:57.564886   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 70/120
	I0829 19:29:58.566565   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 71/120
	I0829 19:29:59.568596   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 72/120
	I0829 19:30:00.569991   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 73/120
	I0829 19:30:01.571364   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 74/120
	I0829 19:30:02.573283   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 75/120
	I0829 19:30:03.574785   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 76/120
	I0829 19:30:04.576188   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 77/120
	I0829 19:30:05.577557   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 78/120
	I0829 19:30:06.578886   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 79/120
	I0829 19:30:07.580252   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 80/120
	I0829 19:30:08.581790   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 81/120
	I0829 19:30:09.583123   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 82/120
	I0829 19:30:10.584590   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 83/120
	I0829 19:30:11.586048   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 84/120
	I0829 19:30:12.588115   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 85/120
	I0829 19:30:13.589553   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 86/120
	I0829 19:30:14.590868   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 87/120
	I0829 19:30:15.592847   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 88/120
	I0829 19:30:16.594224   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 89/120
	I0829 19:30:17.596448   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 90/120
	I0829 19:30:18.597621   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 91/120
	I0829 19:30:19.598893   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 92/120
	I0829 19:30:20.599975   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 93/120
	I0829 19:30:21.601144   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 94/120
	I0829 19:30:22.603051   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 95/120
	I0829 19:30:23.604520   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 96/120
	I0829 19:30:24.605872   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 97/120
	I0829 19:30:25.607148   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 98/120
	I0829 19:30:26.608640   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 99/120
	I0829 19:30:27.610874   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 100/120
	I0829 19:30:28.612578   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 101/120
	I0829 19:30:29.613854   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 102/120
	I0829 19:30:30.615170   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 103/120
	I0829 19:30:31.616407   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 104/120
	I0829 19:30:32.618570   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 105/120
	I0829 19:30:33.619879   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 106/120
	I0829 19:30:34.621216   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 107/120
	I0829 19:30:35.622786   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 108/120
	I0829 19:30:36.624207   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 109/120
	I0829 19:30:37.626512   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 110/120
	I0829 19:30:38.627853   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 111/120
	I0829 19:30:39.629511   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 112/120
	I0829 19:30:40.631683   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 113/120
	I0829 19:30:41.633247   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 114/120
	I0829 19:30:42.635540   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 115/120
	I0829 19:30:43.637118   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 116/120
	I0829 19:30:44.638697   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 117/120
	I0829 19:30:45.640020   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 118/120
	I0829 19:30:46.641400   78469 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for machine to stop 119/120
	I0829 19:30:47.642767   78469 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0829 19:30:47.642817   78469 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0829 19:30:47.644808   78469 out.go:201] 
	W0829 19:30:47.646224   78469 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0829 19:30:47.646241   78469 out.go:270] * 
	* 
	W0829 19:30:47.649101   78469 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:30:47.650349   78469 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-672127 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
E0829 19:30:47.810221   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:49.950943   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:49.957291   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:49.968634   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:49.990006   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:50.031368   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:50.112887   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:50.274390   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:50.595764   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:51.237055   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:52.519123   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:53.452549   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:54.740295   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:55.081062   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:00.202946   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127: exit status 3 (18.425953464s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:31:06.078455   79353 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host
	E0829 19:31:06.078481   79353 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-672127" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.92s)
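After each failed stop, the post-mortem helper probes the host with out/minikube-linux-amd64 status --format={{.Host}} -p <profile>; in these runs the probe itself exits with status 3 because the guest's SSH port is unreachable ("no route to host"), which the harness flags as "may be ok" but is not the expected "Stopped" state. The sketch below shows one way to reproduce that probe from Go with os/exec and surface the exit code to the caller; the binary path and profile name are copied from the log, and hostStatus is a hypothetical helper, not part of the test suite.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}} -p <profile>` and
// returns its stdout plus the exit code. In this report, exit code 3 meant
// the status probe could not reach the guest over SSH at all.
func hostStatus(minikubeBin, profile string) (output string, exitCode int, err error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile)
	out, runErr := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(runErr, &exitErr) {
		// The command ran but reported a problem; surface the code, not an error.
		return string(out), exitErr.ExitCode(), nil
	}
	return string(out), 0, runErr
}

func main() {
	// Binary path and profile name copied from the failing run above.
	out, code, err := hostStatus("out/minikube-linux-amd64", "default-k8s-diff-port-672127")
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", out, code)
}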

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795: exit status 3 (3.167754113s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:29:46.302492   78737 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	E0829 19:29:46.302517   78737 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-690795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0829 19:29:49.632194   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:49.776726   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-690795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152813103s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-690795 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795
E0829 19:29:53.296856   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795: exit status 3 (3.062882453s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:29:55.518497   78818 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	E0829 19:29:55.518517   78818 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-690795" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
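The EnableAddonAfterStop failure above comes down to one symptom: after the stop, every status probe dials 192.168.39.76:22 and gets "no route to host", so the host reports "Error" instead of "Stopped" and the follow-up addons enable exits with MK_ADDON_ENABLE_PAUSED. Below is a minimal hand-run diagnostic sketch (not part of the recorded test output); it assumes the libvirt CLI (virsh) and netcat are available on the Jenkins host and takes the profile name and IP verbatim from the log.

# hedged sketch: reproduce the same probes the status helper makes
out/minikube-linux-amd64 status -p no-preload-690795 --alsologtostderr   # should exit 3 with the same SSH dial error
sudo virsh domstate no-preload-690795                                    # is the stopped VM domain shut off, or gone entirely?
nc -zv -w 3 192.168.39.76 22                                             # the address/port the status check tries to reach
out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-690795       # what the error box above asks to attach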

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571
E0829 19:29:57.001107   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.007553   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.018937   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.040368   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.081809   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.163327   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.325156   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:57.647285   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:58.289376   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:29:59.570883   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571: exit status 3 (3.167920327s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:29:59.614480   78906 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	E0829 19:29:59.614502   78906 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-920571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0829 19:30:02.132248   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-920571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152879732s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-920571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571
E0829 19:30:07.254330   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571: exit status 3 (3.063070135s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:30:08.830444   78987 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host
	E0829 19:30:08.830458   78987 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.243:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-920571" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-467349 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-467349 create -f testdata/busybox.yaml: exit status 1 (42.625503ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-467349" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-467349 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 6 (215.674842ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:30:09.023705   79064 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467349" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 6 (208.522636ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:30:09.232530   79128 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467349" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
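The DeployApp failure above is a stale-kubeconfig problem rather than a cluster problem: the "old-k8s-version-467349" context is missing from the job's kubeconfig, so kubectl cannot create the busybox pod and the status helper prints the update-context warning. A minimal sketch of inspecting and repairing that state by hand follows (not part of the recorded test output); it assumes the kubeconfig path is the one printed by status.go:417 above.

# hedged sketch: confirm the missing context, rewrite it, retry the failing step
kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/19531-13056/kubeconfig   # the profile's context is absent
out/minikube-linux-amd64 update-context -p old-k8s-version-467349        # rewrite the kubeconfig entry, as the warning suggests
kubectl --context old-k8s-version-467349 create -f testdata/busybox.yaml # retry the step that failed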

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0829 19:30:11.979721   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.474826   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.481134   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.492457   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.513843   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.555269   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.637551   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:12.799143   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:13.121084   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:13.763041   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:13.778431   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:15.044722   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:17.496694   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:17.606768   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:22.728442   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:32.970478   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:30:37.978077   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.734684461s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-467349 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-467349 describe deploy/metrics-server -n kube-system: exit status 1 (43.074245ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-467349" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-467349 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 6 (214.029292ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:31:56.224490   79754 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467349" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.99s)
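The EnableAddonWhileActive failure above is an apiserver-reachability problem: the addon callback runs kubectl apply against the metrics-server manifests inside the VM, and that apply is refused on localhost:8443. Below is a minimal hand-run sketch of checking the control plane from the host before rerunning one of the failing applies (not part of the recorded test output); it assumes crictl and curl are present in the guest, and copies the binary and manifest paths from the error above.

# hedged sketch: is the apiserver actually serving before the addon apply runs?
out/minikube-linux-amd64 ssh -p old-k8s-version-467349 -- sudo crictl ps --name kube-apiserver    # is the apiserver container up at all?
out/minikube-linux-amd64 ssh -p old-k8s-version-467349 -- curl -sk https://localhost:8443/readyz  # the endpoint the apply could not reach
out/minikube-linux-amd64 ssh -p old-k8s-version-467349 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-server-deployment.yaml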

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127: exit status 3 (3.167766794s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:31:09.246459   79447 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host
	E0829 19:31:09.246486   79447 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-672127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0829 19:31:10.444942   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-672127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152132437s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-672127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127: exit status 3 (3.063561614s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 19:31:18.462472   79528 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host
	E0829 19:31:18.462494   79528 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.70:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-672127" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (702.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0829 19:32:01.660187   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:32:11.888237   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:32:16.662439   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:32:22.142075   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:32:40.861201   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:32:56.336143   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:33:03.103483   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:33:03.950620   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:33:26.706714   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:33:31.652382   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:33:33.809762   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:33:50.041531   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:34:17.744314   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:34:25.025008   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:34:32.802293   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:34:49.632313   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:34:57.000309   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:35:00.504241   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:35:12.475072   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:35:24.702657   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:35:40.177570   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:35:49.951233   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:36:12.703175   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:36:17.651358   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:36:41.163707   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:37:08.867236   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:38:03.951214   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:38:26.706301   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:38:50.041037   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:39:32.801772   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:39:49.631849   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:39:57.000341   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m39.333546268s)

                                                
                                                
-- stdout --
	* [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
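The retry.go lines above show the driver polling libvirt's DHCP leases with an increasing delay until the domain reports an address. A minimal Go sketch of that wait-with-backoff pattern follows; the lookupIP helper is hypothetical and stands in for the lease query, this is not minikube's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP retries lookupIP with a jittered, roughly exponential backoff,
// in the spirit of the "will retry after ..." messages in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(10 * time.Second); err == nil {
		fmt.Println("Found IP for machine:", ip)
	}
}
```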
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
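The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and the machine's private key, then runs `exit 0` to confirm the guest is reachable. A rough stand-alone equivalent of that probe, with the address, key path, and the main options copied from the log (some options omitted for brevity), might be:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Roughly the external SSH probe from the log: run `exit 0` on the guest
	// with strict host-key checking disabled and the machine's private key.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa",
		"-p", "22",
		"docker@192.168.72.112",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
```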
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
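The configureAuth step above reports issuing a machine server certificate whose SANs cover the loopback address, the guest IP, and the machine names. As an illustration only (not minikube's implementation, and with error handling elided), a certificate with that SAN set can be produced with Go's crypto/x509 roughly as follows:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA (illustrative only;
	// error handling elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list shown in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-467349"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.112")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-467349"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert bytes:", len(srvDER), "err:", err)
}
```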
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
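The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch CRI-O to the cgroupfs cgroup manager before restarting the service. The sketch below applies the same edits in-process as a simplification: the real flow deletes conmon_cgroup and re-adds it after cgroup_manager, whereas this just replaces the lines, and the sample config content is hypothetical.

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
const before = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	conf := before
	// Pin the pause image, as in the first sed command in the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Switch the cgroup manager to cgroupfs and run conmon in the "pod" cgroup,
	// matching the effect of the remaining sed edits.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}
```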
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
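Because no preloaded images were found in the runtime, the tarball of cached images is copied to the guest and unpacked into /var with extended attributes preserved. A stand-alone equivalent of that extraction command, assuming the tarball and the lz4 tool are present on the target host, could look like this:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror of the extraction command in the log: decompress with lz4 and
	// unpack into /var, keeping security.capability extended attributes.
	// Paths are taken from the log and assumed to exist on the target host.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err: %v output: %s\n", err, out)
}
```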
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
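
The generated config above pins the pod network to 10.244.0.0/16 and the service network to 10.96.0.0/12; those two ranges have to stay disjoint for in-cluster routing to work. A minimal Go sketch of that sanity check, with the CIDR values copied from the config and everything else purely illustrative:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values copied from the generated kubeadm config above.
	pods := netip.MustParsePrefix("10.244.0.0/16")    // networking.podSubnet
	services := netip.MustParsePrefix("10.96.0.0/12") // networking.serviceSubnet

	// The pod and service ranges must not overlap; for the values above
	// this prints "false".
	fmt.Println("pod/service CIDRs overlap:", pods.Overlaps(services))
}
```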
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
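
The openssl x509 -checkend 86400 runs above verify that each control-plane certificate stays valid for at least another 24 hours. A rough, hedged Go equivalent using only the standard library; the path is one of the certs checked above, and the helper name is illustrative rather than anything minikube itself uses:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d (the rough equivalent of openssl's -checkend check).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the certificates checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```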
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
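
The repeated pgrep runs above show minikube polling roughly twice per second for a kube-apiserver process before giving up and falling back to log gathering. A hedged sketch of that kind of wait loop in Go, using only the standard library and not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a matching process appears or the timeout
// elapses. Illustrative only; the pattern below is the one used in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```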
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	* 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	* 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 

                                                
                                                
** /stderr **
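The failure above is minikube's K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase gives up because the kubelet never answers on localhost:10248. The log itself points at two follow-ups: inspect the kubelet on the node, and retry the start with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of those steps, assuming SSH access to the profile's VM; the start flags are copied from the failing invocation recorded just below, and whether the cgroup-driver override actually resolves this run is not established by the log:

	# Inspect the kubelet inside the VM (commands suggested by the kubeadm output above)
	minikube ssh -p old-k8s-version-467349 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-467349 -- sudo journalctl -xeu kubelet
	# List control-plane containers through the cri-o socket
	minikube ssh -p old-k8s-version-467349 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry the start with the cgroup-driver override the log recommends
	out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
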
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-467349 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (218.342945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
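The host reports Running even though the start exited 109, which is consistent with the kubelet-level failure above rather than a dead VM. A per-component view can be had by widening the status template; this is a sketch using the same --format flag the harness uses for {{.Host}}, with the {{.Kubelet}} and {{.APIServer}} fields assumed from minikube's status template:

	out/minikube-linux-amd64 status -p old-k8s-version-467349 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
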
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 25: (1.498217699s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo cat                              | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:31:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:32:00.606343   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:03.678411   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:09.758354   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:12.830416   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:18.910387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:21.982407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:28.062408   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:31.134407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:37.214369   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:40.286345   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:46.366360   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:49.438406   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:55.518437   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:58.590377   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:04.670397   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:07.742436   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:13.822348   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:16.894422   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:22.974353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:26.046337   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:32.126325   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:35.198391   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:41.278353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:44.350421   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:50.434297   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:53.502296   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:59.582448   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:02.654443   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:08.734358   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:11.806435   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:17.886372   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:20.958351   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:27.038356   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:30.110387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:33.114600   79073 start.go:364] duration metric: took 4m24.136110592s to acquireMachinesLock for "embed-certs-920571"
	I0829 19:34:33.114658   79073 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:33.114666   79073 fix.go:54] fixHost starting: 
	I0829 19:34:33.115014   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:33.115043   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:33.130652   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0829 19:34:33.131096   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:33.131536   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:34:33.131555   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:33.131871   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:33.132060   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:33.132217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:34:33.133784   79073 fix.go:112] recreateIfNeeded on embed-certs-920571: state=Stopped err=<nil>
	I0829 19:34:33.133809   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	W0829 19:34:33.133951   79073 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:33.135573   79073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-920571" ...
	I0829 19:34:33.136726   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Start
	I0829 19:34:33.136873   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring networks are active...
	I0829 19:34:33.137613   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network default is active
	I0829 19:34:33.137909   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network mk-embed-certs-920571 is active
	I0829 19:34:33.138400   79073 main.go:141] libmachine: (embed-certs-920571) Getting domain xml...
	I0829 19:34:33.139091   79073 main.go:141] libmachine: (embed-certs-920571) Creating domain...
	I0829 19:34:33.112327   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:33.112363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112705   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:34:33.112736   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112943   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:34:33.114457   78865 machine.go:96] duration metric: took 4m37.430735456s to provisionDockerMachine
	I0829 19:34:33.114505   78865 fix.go:56] duration metric: took 4m37.452542806s for fixHost
	I0829 19:34:33.114516   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 4m37.452590646s
	W0829 19:34:33.114545   78865 start.go:714] error starting host: provision: host is not running
	W0829 19:34:33.114637   78865 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 19:34:33.114647   78865 start.go:729] Will try again in 5 seconds ...
	I0829 19:34:34.366249   79073 main.go:141] libmachine: (embed-certs-920571) Waiting to get IP...
	I0829 19:34:34.367233   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.367595   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.367671   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.367580   80412 retry.go:31] will retry after 294.1031ms: waiting for machine to come up
	I0829 19:34:34.663229   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.663677   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.663709   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.663624   80412 retry.go:31] will retry after 345.352879ms: waiting for machine to come up
	I0829 19:34:35.010102   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.010576   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.010604   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.010527   80412 retry.go:31] will retry after 295.49024ms: waiting for machine to come up
	I0829 19:34:35.308077   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.308580   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.308608   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.308525   80412 retry.go:31] will retry after 575.095429ms: waiting for machine to come up
	I0829 19:34:35.885400   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.885806   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.885835   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.885762   80412 retry.go:31] will retry after 524.463725ms: waiting for machine to come up
	I0829 19:34:36.411496   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:36.411840   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:36.411866   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:36.411802   80412 retry.go:31] will retry after 672.277111ms: waiting for machine to come up
	I0829 19:34:37.085978   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:37.086512   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:37.086537   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:37.086473   80412 retry.go:31] will retry after 1.185875442s: waiting for machine to come up
	I0829 19:34:38.274401   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:38.274881   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:38.274914   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:38.274827   80412 retry.go:31] will retry after 1.426721352s: waiting for machine to come up
	I0829 19:34:38.116486   78865 start.go:360] acquireMachinesLock for no-preload-690795: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:34:39.703333   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:39.703732   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:39.703756   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:39.703691   80412 retry.go:31] will retry after 1.500429564s: waiting for machine to come up
	I0829 19:34:41.206311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:41.206854   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:41.206882   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:41.206766   80412 retry.go:31] will retry after 2.021866027s: waiting for machine to come up
	I0829 19:34:43.230915   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:43.231329   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:43.231382   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:43.231318   80412 retry.go:31] will retry after 2.415112477s: waiting for machine to come up
	I0829 19:34:45.649815   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:45.650169   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:45.650221   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:45.650140   80412 retry.go:31] will retry after 3.292956483s: waiting for machine to come up
	I0829 19:34:50.094786   79559 start.go:364] duration metric: took 3m31.488453615s to acquireMachinesLock for "default-k8s-diff-port-672127"
	I0829 19:34:50.094847   79559 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:50.094857   79559 fix.go:54] fixHost starting: 
	I0829 19:34:50.095330   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:50.095367   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:50.112044   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0829 19:34:50.112510   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:50.112941   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:34:50.112964   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:50.113325   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:50.113522   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:34:50.113663   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:34:50.115335   79559 fix.go:112] recreateIfNeeded on default-k8s-diff-port-672127: state=Stopped err=<nil>
	I0829 19:34:50.115378   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	W0829 19:34:50.115548   79559 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:50.117176   79559 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-672127" ...
	I0829 19:34:48.944274   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.944748   79073 main.go:141] libmachine: (embed-certs-920571) Found IP for machine: 192.168.61.243
	I0829 19:34:48.944776   79073 main.go:141] libmachine: (embed-certs-920571) Reserving static IP address...
	I0829 19:34:48.944793   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has current primary IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.945167   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.945195   79073 main.go:141] libmachine: (embed-certs-920571) Reserved static IP address: 192.168.61.243
	I0829 19:34:48.945214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | skip adding static IP to network mk-embed-certs-920571 - found existing host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"}
	I0829 19:34:48.945225   79073 main.go:141] libmachine: (embed-certs-920571) Waiting for SSH to be available...
	I0829 19:34:48.945236   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Getting to WaitForSSH function...
	I0829 19:34:48.947646   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948004   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.948034   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948132   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH client type: external
	I0829 19:34:48.948162   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa (-rw-------)
	I0829 19:34:48.948280   79073 main.go:141] libmachine: (embed-certs-920571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:34:48.948307   79073 main.go:141] libmachine: (embed-certs-920571) DBG | About to run SSH command:
	I0829 19:34:48.948328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | exit 0
	I0829 19:34:49.073781   79073 main.go:141] libmachine: (embed-certs-920571) DBG | SSH cmd err, output: <nil>: 
	I0829 19:34:49.074184   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetConfigRaw
	I0829 19:34:49.074813   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.077014   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077349   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.077369   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077550   79073 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:34:49.077724   79073 machine.go:93] provisionDockerMachine start ...
	I0829 19:34:49.077739   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.077936   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.080112   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080448   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.080472   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080548   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.080715   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080853   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080983   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.081110   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.081294   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.081306   79073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:34:49.182232   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:34:49.182282   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182556   79073 buildroot.go:166] provisioning hostname "embed-certs-920571"
	I0829 19:34:49.182582   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182783   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.185368   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185727   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.185751   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185901   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.186077   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186237   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186379   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.186505   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.186721   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.186740   79073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-920571 && echo "embed-certs-920571" | sudo tee /etc/hostname
	I0829 19:34:49.300225   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-920571
	
	I0829 19:34:49.300261   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.303129   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303497   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.303528   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303682   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.303883   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304061   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304193   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.304466   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.304650   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.304667   79073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920571/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:34:49.413678   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:49.413710   79073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:34:49.413765   79073 buildroot.go:174] setting up certificates
	I0829 19:34:49.413774   79073 provision.go:84] configureAuth start
	I0829 19:34:49.413786   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.414069   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.416618   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.416965   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.416993   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.417143   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.419308   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419585   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.419630   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419746   79073 provision.go:143] copyHostCerts
	I0829 19:34:49.419802   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:34:49.419820   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:34:49.419882   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:34:49.419973   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:34:49.419981   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:34:49.420005   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:34:49.420055   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:34:49.420063   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:34:49.420083   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:34:49.420129   79073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920571 san=[127.0.0.1 192.168.61.243 embed-certs-920571 localhost minikube]
	I0829 19:34:49.488345   79073 provision.go:177] copyRemoteCerts
	I0829 19:34:49.488396   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:34:49.488418   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.490954   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491290   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.491328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491473   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.491667   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.491794   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.491932   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.571847   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:34:49.594401   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 19:34:49.615988   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:34:49.638030   79073 provision.go:87] duration metric: took 224.241128ms to configureAuth
	I0829 19:34:49.638058   79073 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:34:49.638251   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:49.638342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.640876   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.641244   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641439   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.641662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641941   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.642126   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.642292   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.642307   79073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:34:49.862247   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:34:49.862276   79073 machine.go:96] duration metric: took 784.541058ms to provisionDockerMachine
	I0829 19:34:49.862286   79073 start.go:293] postStartSetup for "embed-certs-920571" (driver="kvm2")
	I0829 19:34:49.862296   79073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:34:49.862325   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.862632   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:34:49.862660   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.865463   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.865871   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.865899   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.866068   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.866285   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.866459   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.866644   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.948826   79073 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:34:49.952779   79073 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:34:49.952800   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:34:49.952858   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:34:49.952935   79073 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:34:49.953034   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:34:49.962083   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:49.986910   79073 start.go:296] duration metric: took 124.612025ms for postStartSetup
	I0829 19:34:49.986944   79073 fix.go:56] duration metric: took 16.872279139s for fixHost
	I0829 19:34:49.986964   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.989581   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.989919   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.989946   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.990080   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.990281   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990519   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.990835   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.991009   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.991020   79073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:34:50.094598   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960090.067799977
	
	I0829 19:34:50.094618   79073 fix.go:216] guest clock: 1724960090.067799977
	I0829 19:34:50.094626   79073 fix.go:229] Guest: 2024-08-29 19:34:50.067799977 +0000 UTC Remote: 2024-08-29 19:34:49.98694779 +0000 UTC m=+281.148944887 (delta=80.852187ms)
	I0829 19:34:50.094667   79073 fix.go:200] guest clock delta is within tolerance: 80.852187ms
	I0829 19:34:50.094672   79073 start.go:83] releasing machines lock for "embed-certs-920571", held for 16.98003549s
	I0829 19:34:50.094697   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.094962   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:50.097867   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098301   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.098331   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098494   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099007   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099190   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099276   79073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:34:50.099322   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.099429   79073 ssh_runner.go:195] Run: cat /version.json
	I0829 19:34:50.099453   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.101909   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.101932   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102283   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102342   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102363   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102460   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102647   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102717   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102899   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102964   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.103032   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.178744   79073 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:50.220024   79073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:34:50.370308   79073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:34:50.379363   79073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:34:50.379435   79073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:34:50.394787   79073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:34:50.394810   79073 start.go:495] detecting cgroup driver to use...
	I0829 19:34:50.394886   79073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:34:50.410061   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:34:50.423846   79073 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:34:50.423910   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:34:50.437117   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:34:50.450318   79073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:34:50.563588   79073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:34:50.706261   79073 docker.go:233] disabling docker service ...
	I0829 19:34:50.706356   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:34:50.721443   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:34:50.734284   79073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:34:50.871611   79073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:34:51.006487   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:34:51.019543   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:34:51.036398   79073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:34:51.036444   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.045884   79073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:34:51.045931   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.055634   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.065379   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.075104   79073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:34:51.085560   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.095777   79073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.114679   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.125695   79073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:34:51.135263   79073 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:34:51.135328   79073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:34:51.148534   79073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:34:51.158658   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:51.281185   79073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:34:51.378558   79073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:34:51.378618   79073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:34:51.383580   79073 start.go:563] Will wait 60s for crictl version
	I0829 19:34:51.383638   79073 ssh_runner.go:195] Run: which crictl
	I0829 19:34:51.387081   79073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:34:51.426413   79073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:34:51.426491   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.453777   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.481306   79073 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:34:50.118500   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Start
	I0829 19:34:50.118776   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring networks are active...
	I0829 19:34:50.119618   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network default is active
	I0829 19:34:50.120105   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network mk-default-k8s-diff-port-672127 is active
	I0829 19:34:50.120501   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Getting domain xml...
	I0829 19:34:50.121238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Creating domain...
	I0829 19:34:51.414344   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting to get IP...
	I0829 19:34:51.415308   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415790   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.415692   80540 retry.go:31] will retry after 256.92247ms: waiting for machine to come up
	I0829 19:34:51.674173   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674728   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674754   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.674670   80540 retry.go:31] will retry after 338.812858ms: waiting for machine to come up
	I0829 19:34:52.015450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.015977   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.016009   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.015920   80540 retry.go:31] will retry after 385.497306ms: waiting for machine to come up
	I0829 19:34:52.403718   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404324   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404361   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.404259   80540 retry.go:31] will retry after 536.615454ms: waiting for machine to come up
	I0829 19:34:52.943166   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943736   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.943678   80540 retry.go:31] will retry after 584.895039ms: waiting for machine to come up
	I0829 19:34:51.482485   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:51.485272   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485599   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:51.485632   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485803   79073 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:34:51.490493   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:51.505212   79073 kubeadm.go:883] updating cluster {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:34:51.505359   79073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:34:51.505413   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:51.539415   79073 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:34:51.539485   79073 ssh_runner.go:195] Run: which lz4
	I0829 19:34:51.543107   79073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:34:51.546831   79073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:34:51.546864   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:34:52.815579   79073 crio.go:462] duration metric: took 1.272496626s to copy over tarball
	I0829 19:34:52.815659   79073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:34:53.530873   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531510   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531540   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:53.531452   80540 retry.go:31] will retry after 790.882954ms: waiting for machine to come up
	I0829 19:34:54.324385   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324785   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324817   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:54.324706   80540 retry.go:31] will retry after 815.842176ms: waiting for machine to come up
	I0829 19:34:55.142878   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143375   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:55.143325   80540 retry.go:31] will retry after 1.177682749s: waiting for machine to come up
	I0829 19:34:56.322780   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323215   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323248   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:56.323160   80540 retry.go:31] will retry after 1.158169512s: waiting for machine to come up
	I0829 19:34:57.483529   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.483990   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.484023   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:57.483917   80540 retry.go:31] will retry after 1.631842784s: waiting for machine to come up
	I0829 19:34:54.931044   79073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.115353131s)
	I0829 19:34:54.931077   79073 crio.go:469] duration metric: took 2.115468165s to extract the tarball
	I0829 19:34:54.931086   79073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:34:54.967902   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:55.006987   79073 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:34:55.007010   79073 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:34:55.007017   79073 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0829 19:34:55.007123   79073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:34:55.007187   79073 ssh_runner.go:195] Run: crio config
	I0829 19:34:55.051987   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:34:55.052016   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:34:55.052039   79073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:34:55.052077   79073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920571 NodeName:embed-certs-920571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:34:55.052254   79073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-920571"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:34:55.052337   79073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:34:55.061509   79073 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:34:55.061586   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:34:55.070182   79073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 19:34:55.086180   79073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:34:55.103184   79073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 19:34:55.119226   79073 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0829 19:34:55.122845   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:55.133782   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:55.266431   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:34:55.283043   79073 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571 for IP: 192.168.61.243
	I0829 19:34:55.283066   79073 certs.go:194] generating shared ca certs ...
	I0829 19:34:55.283081   79073 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:34:55.283237   79073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:34:55.283287   79073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:34:55.283297   79073 certs.go:256] generating profile certs ...
	I0829 19:34:55.283438   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/client.key
	I0829 19:34:55.283519   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key.dda9dcff
	I0829 19:34:55.283573   79073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key
	I0829 19:34:55.283708   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:34:55.283773   79073 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:34:55.283793   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:34:55.283831   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:34:55.283869   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:34:55.283901   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:34:55.283957   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:55.284835   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:34:55.330384   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:34:55.366718   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:34:55.393815   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:34:55.436855   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 19:34:55.463343   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:34:55.487693   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:34:55.511657   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:34:55.536017   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:34:55.558298   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:34:55.579840   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:34:55.601271   79073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:34:55.616634   79073 ssh_runner.go:195] Run: openssl version
	I0829 19:34:55.621890   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:34:55.633224   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637431   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637486   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.643034   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:34:55.654607   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:34:55.666297   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670433   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670492   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.675787   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:34:55.686953   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:34:55.697241   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701133   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701189   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.706242   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
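
The three certificate installs above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under <hash>.0 so the system trust store picks it up. A minimal Go sketch of that pattern (paths and the helper name are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the "openssl x509 -hash" + "ln -fs" commands above.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link so repeated runs stay idempotent
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
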
	I0829 19:34:55.716165   79073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:34:55.720159   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:34:55.727612   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:34:55.734806   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:34:55.742352   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:34:55.749483   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:34:55.756543   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
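
The six `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. A small sketch of the same probe in Go (function name and the example path are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    )

    // certValidFor mirrors `openssl x509 -checkend N`: openssl exits 0 only if
    // the certificate will NOT expire within the next N seconds.
    func certValidFor(path string, seconds int) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", path,
    		"-checkend", strconv.Itoa(seconds)).Run() == nil
    }

    func main() {
    	fmt.Println(certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400))
    }
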
	I0829 19:34:55.763413   79073 kubeadm.go:392] StartCluster: {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:34:55.763499   79073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:34:55.763537   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.803136   79073 cri.go:89] found id: ""
	I0829 19:34:55.803219   79073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:34:55.812851   79073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:34:55.812868   79073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:34:55.812907   79073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:34:55.823461   79073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:55.824969   79073 kubeconfig.go:125] found "embed-certs-920571" server: "https://192.168.61.243:8443"
	I0829 19:34:55.828095   79073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:34:55.838579   79073 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.243
	I0829 19:34:55.838616   79073 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:34:55.838626   79073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:34:55.838669   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.876618   79073 cri.go:89] found id: ""
	I0829 19:34:55.876674   79073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:34:55.893401   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:34:55.902557   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:34:55.902579   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:34:55.902631   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:34:55.911349   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:34:55.911407   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:34:55.920377   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:34:55.928764   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:34:55.928824   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:34:55.937630   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.945836   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:34:55.945897   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.954491   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:34:55.962466   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:34:55.962517   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:34:55.971080   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:34:55.979709   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:56.086301   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.378119   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.29178222s)
	I0829 19:34:57.378153   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.574026   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.655499   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
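
Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of performing a full init. A compact sketch of that sequence, assuming the version-pinned kubeadm binary under /var/lib/minikube/binaries (helper name is illustrative; minikube drives these commands over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runInitPhases replays the individual `kubeadm init phase` steps from the
    // log against one config file, in the same order they appear above.
    func runInitPhases(cfg, version string) error {
    	bin := "/var/lib/minikube/binaries/" + version + "/kubeadm"
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", cfg)
    		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %v: %v\n%s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml", "v1.31.0"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
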
	I0829 19:34:57.755371   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:34:57.755457   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.255939   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.755813   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.117916   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118404   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118427   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:59.118355   80540 retry.go:31] will retry after 2.806936823s: waiting for machine to come up
	I0829 19:35:01.927079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927473   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:01.927422   80540 retry.go:31] will retry after 3.008556566s: waiting for machine to come up
	I0829 19:34:59.255536   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.756296   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.802484   79073 api_server.go:72] duration metric: took 2.047112988s to wait for apiserver process to appear ...
	I0829 19:34:59.802516   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:34:59.802537   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:34:59.803088   79073 api_server.go:269] stopped: https://192.168.61.243:8443/healthz: Get "https://192.168.61.243:8443/healthz": dial tcp 192.168.61.243:8443: connect: connection refused
	I0829 19:35:00.302707   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.439793   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.439825   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.439837   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.482217   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.482245   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.802617   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.811079   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:02.811116   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.303128   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.307613   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:03.307657   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.803189   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.809164   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:35:03.816623   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:03.816649   79073 api_server.go:131] duration metric: took 4.014126212s to wait for apiserver health ...
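
The healthz wait above goes through the typical restart sequence: connection refused while the apiserver comes up, 403 for the anonymous probe, 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 "ok". A minimal polling loop in Go that tolerates those intermediate responses (the InsecureSkipVerify transport is an illustrative shortcut; the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the timeout elapses,
    // treating connection errors, 403 and 500 as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver on %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.243:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
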
	I0829 19:35:03.816657   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:35:03.816664   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:03.818484   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:03.819706   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:03.833365   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
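
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration recommended for the kvm2 driver with the crio runtime. A hedged example of what such a conflist can look like, written from Go (the subnet and plugin options are illustrative, not the exact file minikube generates):

    package main

    import (
    	"fmt"
    	"os"
    )

    // bridgeConflist is an illustrative bridge CNI configuration; the real file
    // differs in details but has the same overall shape.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
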
	I0829 19:35:03.851607   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:03.861274   79073 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:03.861313   79073 system_pods.go:61] "coredns-6f6b679f8f-2wrn6" [05e03841-faab-4fd4-88c9-199b39a71ba6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:03.861320   79073 system_pods.go:61] "etcd-embed-certs-920571" [5545a51a-3b76-4b39-b347-6f68b8d7edbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:03.861328   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [cecb3e4e-9d55-4dc9-8d14-884ffbf56475] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:03.861334   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [77e06ace-0262-418f-b41c-700aabf2fa1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:03.861338   79073 system_pods.go:61] "kube-proxy-hflpk" [a57a1785-8ccf-4955-b5b2-19c72032d9f5] Running
	I0829 19:35:03.861353   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [bdb2ed9c-3bf2-4e91-b6a4-ba947dab93ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:03.861359   79073 system_pods.go:61] "metrics-server-6867b74b74-xs5gp" [98380519-4a65-4208-b9cc-f1941a5c2f01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:03.861362   79073 system_pods.go:61] "storage-provisioner" [d18a769f-283f-4db3-aad0-82fc0267980f] Running
	I0829 19:35:03.861368   79073 system_pods.go:74] duration metric: took 9.738329ms to wait for pod list to return data ...
	I0829 19:35:03.861375   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:03.865311   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:03.865341   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:03.865355   79073 node_conditions.go:105] duration metric: took 3.974661ms to run NodePressure ...
	I0829 19:35:03.865373   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:04.939084   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939532   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939567   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:04.939479   80540 retry.go:31] will retry after 3.738266407s: waiting for machine to come up
	I0829 19:35:04.123411   79073 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127613   79073 kubeadm.go:739] kubelet initialised
	I0829 19:35:04.127639   79073 kubeadm.go:740] duration metric: took 4.197494ms waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127649   79073 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:04.132339   79073 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.136884   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136909   79073 pod_ready.go:82] duration metric: took 4.548897ms for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.136917   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136927   79073 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.141014   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141037   79073 pod_ready.go:82] duration metric: took 4.103179ms for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.141048   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141062   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.144778   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144799   79073 pod_ready.go:82] duration metric: took 3.728001ms for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.144807   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144812   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.255204   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255227   79073 pod_ready.go:82] duration metric: took 110.408053ms for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.255247   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255253   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656086   79073 pod_ready.go:93] pod "kube-proxy-hflpk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:04.656124   79073 pod_ready.go:82] duration metric: took 400.860776ms for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656137   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:06.674533   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
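
Each pod_ready wait above polls the pod until its PodReady condition reports True, and skips pods whose node is still NotReady. A client-go sketch of the readiness check itself (assumes an already-configured clientset; package and function names are illustrative):

    package podcheck

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether a kube-system pod has condition PodReady=True,
    // which is what the pod_ready waits in the log poll for.
    func podReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
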
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	I0829 19:35:08.681559   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682042   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Found IP for machine: 192.168.50.70
	I0829 19:35:08.682070   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has current primary IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682080   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserving static IP address...
	I0829 19:35:08.682524   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserved static IP address: 192.168.50.70
	I0829 19:35:08.682564   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.682580   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for SSH to be available...
	I0829 19:35:08.682609   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | skip adding static IP to network mk-default-k8s-diff-port-672127 - found existing host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"}
	I0829 19:35:08.682623   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Getting to WaitForSSH function...
	I0829 19:35:08.684466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684816   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.684876   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684957   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH client type: external
	I0829 19:35:08.684982   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa (-rw-------)
	I0829 19:35:08.685032   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:08.685053   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | About to run SSH command:
	I0829 19:35:08.685069   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | exit 0
	I0829 19:35:08.806174   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:08.806493   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetConfigRaw
	I0829 19:35:08.807134   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:08.809574   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.809900   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.809924   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.810227   79559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:35:08.810457   79559 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:08.810478   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:08.810675   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.812964   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.813368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813620   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.813815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.813994   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.814161   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.814338   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.814533   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.814544   79559 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:08.914370   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:08.914415   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914742   79559 buildroot.go:166] provisioning hostname "default-k8s-diff-port-672127"
	I0829 19:35:08.914782   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914975   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.918471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.918829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.918857   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.919021   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.919186   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919373   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.919664   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.919865   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.919884   79559 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-672127 && echo "default-k8s-diff-port-672127" | sudo tee /etc/hostname
	I0829 19:35:09.032573   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-672127
	
	I0829 19:35:09.032606   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.035434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035811   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.035840   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035999   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.036182   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036465   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.036651   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.036833   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.036852   79559 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-672127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-672127/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-672127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:09.142908   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:09.142937   79559 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:09.142978   79559 buildroot.go:174] setting up certificates
	I0829 19:35:09.142995   79559 provision.go:84] configureAuth start
	I0829 19:35:09.143010   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:09.143258   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.145947   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146313   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.146339   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146460   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.148631   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.148953   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.148978   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.149128   79559 provision.go:143] copyHostCerts
	I0829 19:35:09.149188   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:09.149204   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:09.149261   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:09.149368   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:09.149378   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:09.149400   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:09.149492   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:09.149501   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:09.149520   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:09.149578   79559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-672127 san=[127.0.0.1 192.168.50.70 default-k8s-diff-port-672127 localhost minikube]
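
The server certificate is generated with a SAN list covering loopback, the machine IP, the machine name, localhost and minikube, so the provisioned endpoint can be verified under any of those names. A crypto/x509 sketch of a template with that SAN list (organization and validity period are illustrative); the template would then be signed with the CA key via x509.CreateCertificate:

    package certs

    import (
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCertTemplate describes a server certificate whose SANs match the
    // "generating server cert" line above.
    func serverCertTemplate(ip, name string) *x509.Certificate {
    	return &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins." + name}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{name, "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
    	}
    }
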
	I0829 19:35:09.370220   79559 provision.go:177] copyRemoteCerts
	I0829 19:35:09.370277   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:09.370301   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.373233   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373723   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.373756   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373966   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.374180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.374342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.374496   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.457104   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:35:09.481139   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:09.504611   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 19:35:09.529597   79559 provision.go:87] duration metric: took 386.586301ms to configureAuth
	I0829 19:35:09.529628   79559 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:09.529887   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:09.529989   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.532809   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533309   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.533342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533509   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.533743   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.533965   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.534169   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.534372   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.534523   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.534545   79559 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:09.754724   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:09.754752   79559 machine.go:96] duration metric: took 944.279776ms to provisionDockerMachine
	I0829 19:35:09.754766   79559 start.go:293] postStartSetup for "default-k8s-diff-port-672127" (driver="kvm2")
	I0829 19:35:09.754781   79559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:09.754807   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.755236   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:09.755270   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.757713   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.758125   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758274   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.758466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.758682   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.758823   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.841022   79559 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:09.846051   79559 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:09.846081   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:09.846163   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:09.846254   79559 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:09.846379   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:09.857443   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:09.884662   79559 start.go:296] duration metric: took 129.87923ms for postStartSetup
	I0829 19:35:09.884715   79559 fix.go:56] duration metric: took 19.789853711s for fixHost
	I0829 19:35:09.884739   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.888011   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888562   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.888593   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888789   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.888976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889188   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889347   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.889533   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.889723   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.889736   79559 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:09.990749   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960109.967111721
	
	I0829 19:35:09.990772   79559 fix.go:216] guest clock: 1724960109.967111721
	I0829 19:35:09.990782   79559 fix.go:229] Guest: 2024-08-29 19:35:09.967111721 +0000 UTC Remote: 2024-08-29 19:35:09.884720437 +0000 UTC m=+231.415600706 (delta=82.391284ms)
	I0829 19:35:09.990835   79559 fix.go:200] guest clock delta is within tolerance: 82.391284ms
	I0829 19:35:09.990846   79559 start.go:83] releasing machines lock for "default-k8s-diff-port-672127", held for 19.896020367s
	I0829 19:35:09.990891   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.991180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.994076   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.994459   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994613   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995121   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995318   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995407   79559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:09.995464   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.995531   79559 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:09.995569   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.998302   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998673   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998703   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998732   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998750   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998832   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.998976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.999026   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999109   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999162   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999249   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999404   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.999395   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:10.124503   79559 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:10.130734   79559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:10.275859   79559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:10.281662   79559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:10.281728   79559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:10.297464   79559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:10.297488   79559 start.go:495] detecting cgroup driver to use...
	I0829 19:35:10.297553   79559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:10.316686   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:10.332836   79559 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:10.332880   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:10.347021   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:10.364479   79559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:10.506136   79559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:10.659246   79559 docker.go:233] disabling docker service ...
	I0829 19:35:10.659324   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:10.678953   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:10.694844   79559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:10.837509   79559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:10.976512   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:10.993421   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:11.013434   79559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:11.013492   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.023909   79559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:11.023980   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.038560   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.049911   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.060235   79559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:11.076772   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.093357   79559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.110140   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.121770   79559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:11.131641   79559 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:11.131697   79559 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:11.151460   79559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:11.161320   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:11.286180   79559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:11.382235   79559 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:11.382312   79559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:11.388226   79559 start.go:563] Will wait 60s for crictl version
	I0829 19:35:11.388299   79559 ssh_runner.go:195] Run: which crictl
	I0829 19:35:11.391832   79559 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:11.429509   79559 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:11.429601   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.457180   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.487106   79559 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:11.488483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:11.491607   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.491988   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:11.492027   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.492316   79559 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:11.496448   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:11.512045   79559 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:11.512159   79559 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:11.512219   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:11.549212   79559 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:11.549287   79559 ssh_runner.go:195] Run: which lz4
	I0829 19:35:11.554151   79559 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:11.558691   79559 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:11.558718   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:35:12.826290   79559 crio.go:462] duration metric: took 1.272173781s to copy over tarball
	I0829 19:35:12.826387   79559 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
	I0829 19:35:09.162551   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:11.165654   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:13.662027   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:15.034221   79559 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.207800368s)
	I0829 19:35:15.034254   79559 crio.go:469] duration metric: took 2.207935139s to extract the tarball
	I0829 19:35:15.034263   79559 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:15.070411   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:15.117649   79559 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:35:15.117675   79559 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:35:15.117684   79559 kubeadm.go:934] updating node { 192.168.50.70 8444 v1.31.0 crio true true} ...
	I0829 19:35:15.117793   79559 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-672127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:15.117873   79559 ssh_runner.go:195] Run: crio config
	I0829 19:35:15.161749   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:15.161778   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:15.161795   79559 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:15.161815   79559 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-672127 NodeName:default-k8s-diff-port-672127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:35:15.161949   79559 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-672127"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:15.162002   79559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:35:15.171789   79559 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:15.171858   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:15.181011   79559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0829 19:35:15.197394   79559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:15.213309   79559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:35:15.231088   79559 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:15.234732   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:15.245700   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:15.368430   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:15.385792   79559 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127 for IP: 192.168.50.70
	I0829 19:35:15.385820   79559 certs.go:194] generating shared ca certs ...
	I0829 19:35:15.385844   79559 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:15.386020   79559 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:15.386108   79559 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:15.386123   79559 certs.go:256] generating profile certs ...
	I0829 19:35:15.386240   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/client.key
	I0829 19:35:15.386324   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key.828c23de
	I0829 19:35:15.386378   79559 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key
	I0829 19:35:15.386523   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:15.386567   79559 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:15.386582   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:15.386615   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:15.386650   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:15.386680   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:15.386736   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:15.387663   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:15.429474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:15.470861   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:15.514906   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:15.552474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:35:15.581749   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:15.605874   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:15.629703   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:35:15.653589   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:15.680222   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:15.706824   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:15.733354   79559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:15.753069   79559 ssh_runner.go:195] Run: openssl version
	I0829 19:35:15.759905   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:15.770507   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776103   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776159   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.783674   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:15.797519   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:15.809517   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814243   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814311   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.819834   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:15.830130   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:15.840473   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.844974   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.845033   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.850619   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:15.860955   79559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:15.865359   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:15.871149   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:15.876982   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:15.882635   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:15.888020   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:15.893423   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:35:15.898989   79559 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:15.899085   79559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:15.899156   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:15.939743   79559 cri.go:89] found id: ""
	I0829 19:35:15.939817   79559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:15.949877   79559 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:15.949896   79559 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:15.949938   79559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:15.959436   79559 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:15.960417   79559 kubeconfig.go:125] found "default-k8s-diff-port-672127" server: "https://192.168.50.70:8444"
	I0829 19:35:15.962469   79559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:15.971672   79559 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0829 19:35:15.971700   79559 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:15.971710   79559 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:15.971777   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:16.015084   79559 cri.go:89] found id: ""
	I0829 19:35:16.015173   79559 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:16.031614   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:16.044359   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:16.044384   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:16.044448   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:35:16.056073   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:16.056139   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:16.066426   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:35:16.075300   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:16.075368   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:16.084795   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.093739   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:16.093804   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.103539   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:35:16.112676   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:16.112744   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:16.121997   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:16.134461   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:16.246853   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.577230   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.330337638s)
	I0829 19:35:17.577271   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.810593   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.892546   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.993500   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:17.993595   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:18.494169   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
	I0829 19:35:15.662198   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:16.162890   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:16.162919   79073 pod_ready.go:82] duration metric: took 11.506772145s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:16.162931   79073 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.170586   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:18.994574   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.493764   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.509384   79559 api_server.go:72] duration metric: took 1.515882118s to wait for apiserver process to appear ...
	I0829 19:35:19.509415   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:35:19.509440   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.555577   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.555625   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:21.555642   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.572445   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.572481   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:22.009612   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.017592   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.017627   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:22.510148   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.516104   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.516140   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:23.009648   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:23.016342   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:35:23.022852   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:23.022878   79559 api_server.go:131] duration metric: took 3.513455745s to wait for apiserver health ...
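	The lines above show the apiserver being polled at /healthz until it stops returning 500 (the 500 bodies list each poststarthook check). A minimal sketch of that kind of poll, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and the endpoint URL taken from the log; this is illustrative, not minikube's own api_server.go code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// TLS verification is skipped because the apiserver cert is self-signed here.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.70:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}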
	I0829 19:35:23.022889   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:23.022897   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:23.024557   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:23.025764   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:23.035743   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
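	The scp above writes minikube's bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the 496-byte payload itself is not shown in the log. The sketch below writes a hypothetical bridge conflist of the same general shape (the plugin fields and subnet are illustrative assumptions, not the file minikube actually generates):

	package main

	import (
		"log"
		"os"
	)

	// A hypothetical bridge CNI conflist; field values are illustrative only.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}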
	I0829 19:35:23.075272   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:23.091948   79559 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:23.091991   79559 system_pods.go:61] "coredns-6f6b679f8f-p92hj" [736e7c46-b945-445f-a404-20a609f766e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:23.092004   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [cf016602-46cd-4972-bdd3-1ef5d881b6e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:23.092014   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [eb51ac87-f5e4-4031-84fe-811da2ff8d63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:23.092026   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [caf7b777-935f-4351-b58d-60bb8175bec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:23.092034   79559 system_pods.go:61] "kube-proxy-tlc89" [9a11e5a6-b624-494b-8e94-d362b94fb98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:35:23.092043   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fe83e2af-b046-4d56-9b5c-d7a17db7e854] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:23.092053   79559 system_pods.go:61] "metrics-server-6867b74b74-tbkxg" [6d8f8c92-4f89-4a2a-8690-51a850768516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:23.092065   79559 system_pods.go:61] "storage-provisioner" [7349bb79-c402-4587-ab0b-e52e5d455c61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:35:23.092078   79559 system_pods.go:74] duration metric: took 16.779413ms to wait for pod list to return data ...
	I0829 19:35:23.092091   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:23.099492   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:23.099533   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:23.099547   79559 node_conditions.go:105] duration metric: took 7.450351ms to run NodePressure ...
	I0829 19:35:23.099571   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:23.371279   79559 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377322   79559 kubeadm.go:739] kubelet initialised
	I0829 19:35:23.377346   79559 kubeadm.go:740] duration metric: took 6.045074ms waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377353   79559 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:23.384232   79559 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.391931   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391960   79559 pod_ready.go:82] duration metric: took 7.702072ms for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.391971   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391980   79559 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.396708   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396728   79559 pod_ready.go:82] duration metric: took 4.739691ms for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.396736   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396744   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.401274   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401298   79559 pod_ready.go:82] duration metric: took 4.546455ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.401308   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401314   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
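	pod_ready.go above waits on each system pod's Ready condition and skips pods whose node is itself not "Ready" yet. A minimal sketch of the same per-pod check using client-go (the kubeconfig path is a placeholder and the pod name is just the one from the log; this is not minikube's pod_ready.go implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-p92hj", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}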
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:20.670356   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:22.670699   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.786910   78865 start.go:364] duration metric: took 49.670356886s to acquireMachinesLock for "no-preload-690795"
	I0829 19:35:27.786963   78865 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:27.786975   78865 fix.go:54] fixHost starting: 
	I0829 19:35:27.787377   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:27.787425   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:27.803558   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0829 19:35:27.803903   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:27.804328   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:35:27.804348   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:27.804623   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:27.804824   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:27.804967   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:35:27.806332   78865 fix.go:112] recreateIfNeeded on no-preload-690795: state=Stopped err=<nil>
	I0829 19:35:27.806353   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	W0829 19:35:27.806525   78865 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:27.808678   78865 out.go:177] * Restarting existing kvm2 VM for "no-preload-690795" ...
	I0829 19:35:25.407622   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.910410   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:25.170397   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.669465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
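	The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod"), clear leftover CNI state, and restart crio. A sketch of the same key rewrites done in-process instead of via sed; the file path and key names mirror the log, but the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setTOMLKey replaces (or appends) a `key = "value"` line in a crio drop-in config.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %q", key, value)
		if re.MatchString(conf) {
			return re.ReplaceAllString(conf, line)
		}
		return conf + "\n" + line + "\n"
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)
		conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
		conf = setTOMLKey(conf, "conmon_cgroup", "pod")
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			panic(err)
		}
		// A `systemctl restart crio` is still needed for the changes to take effect.
	}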
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:27.810111   78865 main.go:141] libmachine: (no-preload-690795) Calling .Start
	I0829 19:35:27.810300   78865 main.go:141] libmachine: (no-preload-690795) Ensuring networks are active...
	I0829 19:35:27.811063   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network default is active
	I0829 19:35:27.811464   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network mk-no-preload-690795 is active
	I0829 19:35:27.811848   78865 main.go:141] libmachine: (no-preload-690795) Getting domain xml...
	I0829 19:35:27.812590   78865 main.go:141] libmachine: (no-preload-690795) Creating domain...
	I0829 19:35:29.131821   78865 main.go:141] libmachine: (no-preload-690795) Waiting to get IP...
	I0829 19:35:29.132876   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.133519   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.133595   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.133481   80876 retry.go:31] will retry after 252.123266ms: waiting for machine to come up
	I0829 19:35:29.387046   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.387534   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.387561   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.387496   80876 retry.go:31] will retry after 304.157394ms: waiting for machine to come up
	I0829 19:35:29.693891   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.694581   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.694603   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.694560   80876 retry.go:31] will retry after 366.980614ms: waiting for machine to come up
	I0829 19:35:30.063032   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.063466   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.063504   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.063431   80876 retry.go:31] will retry after 562.46082ms: waiting for machine to come up
	I0829 19:35:30.412868   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.908366   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.408823   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.408848   79559 pod_ready.go:82] duration metric: took 10.007525744s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.408862   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418176   79559 pod_ready.go:93] pod "kube-proxy-tlc89" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.418202   79559 pod_ready.go:82] duration metric: took 9.33136ms for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418214   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424362   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.424388   79559 pod_ready.go:82] duration metric: took 6.165646ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424401   79559 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:29.670803   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.171154   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:30.627531   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.628123   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.628147   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.628030   80876 retry.go:31] will retry after 488.97189ms: waiting for machine to come up
	I0829 19:35:31.118901   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.119457   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.119480   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.119398   80876 retry.go:31] will retry after 801.189699ms: waiting for machine to come up
	I0829 19:35:31.921939   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.922447   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.922482   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.922391   80876 retry.go:31] will retry after 828.788864ms: waiting for machine to come up
	I0829 19:35:32.752986   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:32.753429   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:32.753465   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:32.753385   80876 retry.go:31] will retry after 1.404436811s: waiting for machine to come up
	I0829 19:35:34.159129   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:34.159714   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:34.159741   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:34.159678   80876 retry.go:31] will retry after 1.312099391s: waiting for machine to come up
	I0829 19:35:35.473045   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:35.473510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:35.473549   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:35.473461   80876 retry.go:31] will retry after 1.46129368s: waiting for machine to come up
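
	The repeated "will retry after ..." lines in this block come from a retry helper that keeps probing the libvirt domain for an IP address, sleeping a growing and slightly randomized interval between attempts (hence durations like 488.97189ms and 801.189699ms). A rough sketch of that poll-with-backoff pattern in Go (hypothetical helper, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitWithBackoff calls probe until it succeeds or the timeout passes,
	// sleeping an increasing, jittered delay between attempts - the same shape
	// as the "will retry after Xms: waiting for machine to come up" lines above.
	func waitWithBackoff(probe func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 500 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := probe()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			// Jitter the delay a little, then grow it, capping at a few seconds.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay = delay * 3 / 2
			}
		}
	}

	func main() {
		// Toy probe that never succeeds, just to show the loop's shape.
		_ = waitWithBackoff(func() error { return errors.New("no IP yet") }, 3*time.Second)
	}
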
	I0829 19:35:35.431524   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:37.437993   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
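
	The block above shells out to `openssl x509 -noout -in <cert> -checkend 86400` for each control-plane certificate, i.e. it asks whether the cert expires within the next 24 hours before deciding to reuse it. The same check can be done natively with crypto/x509; a minimal sketch (hypothetical helper, not minikube's code):

	package certs

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within
	// the given window - the Go equivalent of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			return false, fmt.Errorf("%s: no PEM certificate found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
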
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:34.172082   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.669124   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.669696   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.936034   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:36.936510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:36.936539   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:36.936463   80876 retry.go:31] will retry after 1.943807762s: waiting for machine to come up
	I0829 19:35:38.881644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:38.882110   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:38.882133   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:38.882067   80876 retry.go:31] will retry after 3.173912619s: waiting for machine to come up
	I0829 19:35:39.932725   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.429439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
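
	This run of pgrep commands is the "waiting for apiserver process to appear" loop: after the kubeadm init phases, the test polls roughly every 500ms for a kube-apiserver process whose command line mentions minikube. A compact sketch of that wait loop (hypothetical helper that shells out to pgrep locally rather than over SSH):

	package apiserver

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `sudo pgrep -xnf <pattern>` until it returns a PID or
	// the context is cancelled - the same loop that produces the repeated
	// "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines above.
	func waitForProcess(ctx context.Context, pattern string) (string, error) {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Output()
			if err == nil && len(out) > 0 {
				return string(out), nil // pgrep prints the matching PID
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("gave up waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}
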
	I0829 19:35:41.168816   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.669594   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.059140   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:42.059668   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:42.059692   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:42.059602   80876 retry.go:31] will retry after 4.193427915s: waiting for machine to come up
	I0829 19:35:44.430473   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.431149   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.671012   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.168836   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.256270   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.256783   78865 main.go:141] libmachine: (no-preload-690795) Found IP for machine: 192.168.39.76
	I0829 19:35:46.256806   78865 main.go:141] libmachine: (no-preload-690795) Reserving static IP address...
	I0829 19:35:46.256822   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has current primary IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.257249   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.257274   78865 main.go:141] libmachine: (no-preload-690795) Reserved static IP address: 192.168.39.76
	I0829 19:35:46.257289   78865 main.go:141] libmachine: (no-preload-690795) DBG | skip adding static IP to network mk-no-preload-690795 - found existing host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"}
	I0829 19:35:46.257299   78865 main.go:141] libmachine: (no-preload-690795) Waiting for SSH to be available...
	I0829 19:35:46.257313   78865 main.go:141] libmachine: (no-preload-690795) DBG | Getting to WaitForSSH function...
	I0829 19:35:46.259334   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259664   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.259692   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259788   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH client type: external
	I0829 19:35:46.259821   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa (-rw-------)
	I0829 19:35:46.259859   78865 main.go:141] libmachine: (no-preload-690795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:46.259871   78865 main.go:141] libmachine: (no-preload-690795) DBG | About to run SSH command:
	I0829 19:35:46.259902   78865 main.go:141] libmachine: (no-preload-690795) DBG | exit 0
	I0829 19:35:46.389869   78865 main.go:141] libmachine: (no-preload-690795) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:46.390295   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetConfigRaw
	I0829 19:35:46.390987   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.393890   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394310   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.394342   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394673   78865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:35:46.394846   78865 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:46.394869   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:46.395082   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.397203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397508   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.397535   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397676   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.397862   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398011   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398178   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.398314   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.398475   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.398486   78865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:46.502132   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:46.502163   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502426   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:35:46.502449   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.505084   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505414   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.505443   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505665   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.505861   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506035   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506219   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.506379   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.506573   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.506597   78865 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-690795 && echo "no-preload-690795" | sudo tee /etc/hostname
	I0829 19:35:46.627246   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-690795
	
	I0829 19:35:46.627269   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.630081   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630430   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.630454   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630611   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.630780   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.630947   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.631233   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.631397   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.631545   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.631568   78865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-690795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-690795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-690795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:46.746055   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:46.746106   78865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:46.746131   78865 buildroot.go:174] setting up certificates
	I0829 19:35:46.746143   78865 provision.go:84] configureAuth start
	I0829 19:35:46.746160   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.746411   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.749125   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749476   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.749497   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.751828   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752178   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.752203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752317   78865 provision.go:143] copyHostCerts
	I0829 19:35:46.752384   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:46.752404   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:46.752475   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:46.752580   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:46.752591   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:46.752619   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:46.752693   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:46.752703   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:46.752728   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:46.752791   78865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.no-preload-690795 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-690795]
	I0829 19:35:46.901689   78865 provision.go:177] copyRemoteCerts
	I0829 19:35:46.901744   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:46.901764   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.904873   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905241   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.905287   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905458   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.905657   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.905805   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.905960   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:46.988181   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:47.011149   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:35:47.034849   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:47.057375   78865 provision.go:87] duration metric: took 311.217634ms to configureAuth
	I0829 19:35:47.057402   78865 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:47.057599   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:47.057695   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.060274   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060594   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.060620   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060750   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.060976   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061149   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061311   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.061465   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.061676   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.061703   78865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:47.284836   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:47.284862   78865 machine.go:96] duration metric: took 890.004565ms to provisionDockerMachine
	I0829 19:35:47.284876   78865 start.go:293] postStartSetup for "no-preload-690795" (driver="kvm2")
	I0829 19:35:47.284889   78865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:47.284909   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.285207   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:47.285232   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.287875   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288162   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.288180   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288391   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.288597   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.288772   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.288899   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.372833   78865 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:47.376649   78865 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:47.376670   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:47.376729   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:47.376801   78865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:47.376881   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:47.385721   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:47.407601   78865 start.go:296] duration metric: took 122.711153ms for postStartSetup
	I0829 19:35:47.407640   78865 fix.go:56] duration metric: took 19.620666095s for fixHost
	I0829 19:35:47.407673   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.410483   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.410873   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.410903   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.411139   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.411363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411527   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411674   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.411830   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.411987   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.412001   78865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:47.518841   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960147.499237123
	
	I0829 19:35:47.518864   78865 fix.go:216] guest clock: 1724960147.499237123
	I0829 19:35:47.518872   78865 fix.go:229] Guest: 2024-08-29 19:35:47.499237123 +0000 UTC Remote: 2024-08-29 19:35:47.407643858 +0000 UTC m=+351.882891548 (delta=91.593265ms)
	I0829 19:35:47.518891   78865 fix.go:200] guest clock delta is within tolerance: 91.593265ms
	I0829 19:35:47.518896   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 19.731957743s
	I0829 19:35:47.518914   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.519214   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:47.521738   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522125   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.522153   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522310   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.522806   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523016   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523082   78865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:47.523127   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.523209   78865 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:47.523225   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.526076   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526443   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.526462   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526489   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526681   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.526826   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527005   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527036   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.527073   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.527199   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.527197   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.527370   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527537   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527690   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.635450   78865 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:47.641274   78865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:47.788805   78865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:47.794545   78865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:47.794601   78865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:47.810156   78865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:47.810175   78865 start.go:495] detecting cgroup driver to use...
	I0829 19:35:47.810228   78865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:47.825795   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:47.839011   78865 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:47.839061   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:47.851854   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:47.864467   78865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:47.999155   78865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:48.143858   78865 docker.go:233] disabling docker service ...
	I0829 19:35:48.143921   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:48.157740   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:48.172067   78865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:48.339557   78865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:48.462950   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:48.475646   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:48.492262   78865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:48.492329   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.501580   78865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:48.501647   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.511241   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.520477   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.530413   78865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:48.540457   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.551258   78865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.567365   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
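
The sed edits logged above all target CRI-O's minikube drop-in. Assuming a stock /etc/crio/crio.conf.d/02-crio.conf, the keys asserted by these commands would end up roughly as in the sketch below; the TOML section headers are whatever the stock file already contains and are shown here only for orientation, not taken from the log:

	# pause image rewritten by the first sed
	pause_image = "registry.k8s.io/pause:3.10"
	# cgroup handling: cgroupfs manager, conmon placed in the pod cgroup
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	# allow unprivileged low ports inside pods
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
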
	I0829 19:35:48.577266   78865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:48.586423   78865 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:48.586479   78865 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:48.599527   78865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:48.608666   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:48.721808   78865 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:48.811417   78865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:48.811495   78865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:48.816689   78865 start.go:563] Will wait 60s for crictl version
	I0829 19:35:48.816750   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:48.820563   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:48.862786   78865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:48.862869   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.889834   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.918515   78865 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:48.919643   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:48.922182   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922530   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:48.922560   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922725   78865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:48.926877   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:48.939254   78865 kubeadm.go:883] updating cluster {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:48.939379   78865 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:48.939413   78865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:48.972281   78865 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:48.972304   78865 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:48.972345   78865 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.972361   78865 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.972384   78865 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.972425   78865 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.972443   78865 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:48.972452   78865 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 19:35:48.972496   78865 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.972558   78865 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973929   78865 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.973979   78865 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 19:35:48.973933   78865 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.973931   78865 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.973932   78865 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973939   78865 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.229315   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 19:35:49.232334   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.271261   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.328903   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.339435   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.349057   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.356840   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.387705   78865 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 19:35:49.387748   78865 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 19:35:49.387760   78865 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.387777   78865 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.387808   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.387829   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.389731   78865 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 19:35:49.389769   78865 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.389809   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.438231   78865 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 19:35:49.438264   78865 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.438304   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.453177   78865 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 19:35:49.453220   78865 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.453270   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.455713   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.455767   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.455802   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.455804   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.455772   78865 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 19:35:49.455895   78865 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.455921   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.458141   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.539090   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.539125   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.568605   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.573622   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.678619   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.680581   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.680584   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.680671   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.699638   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.706556   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.803909   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.809759   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 19:35:49.809863   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.810356   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 19:35:49.810423   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:49.811234   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 19:35:49.811285   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:49.832040   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 19:35:49.832102   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 19:35:49.832153   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:49.832162   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:49.862517   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 19:35:49.862537   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862578   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862653   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 19:35:49.862696   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 19:35:49.862703   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 19:35:49.862731   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 19:35:49.862760   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 19:35:49.862788   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:35:50.192890   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.930928   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:50.931805   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.430716   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.168967   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:52.169327   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:51.820978   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.958376621s)
	I0829 19:35:51.821014   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 19:35:51.821035   78865 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821077   78865 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.958265625s)
	I0829 19:35:51.821109   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821108   78865 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.62819044s)
	I0829 19:35:51.821211   78865 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 19:35:51.821243   78865 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:51.821275   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:51.821111   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 19:35:55.931182   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.431477   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.669752   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:56.670764   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:55.594240   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.773093303s)
	I0829 19:35:55.594275   78865 ssh_runner.go:235] Completed: which crictl: (3.77298113s)
	I0829 19:35:55.594290   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 19:35:55.594340   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:55.594348   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:55.594403   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:57.972145   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377784997s)
	I0829 19:35:57.972180   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.377757134s)
	I0829 19:35:57.972210   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 19:35:57.972223   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:57.972237   78865 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:57.972270   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:58.025853   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:59.843856   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.871560481s)
	I0829 19:35:59.843883   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818003416s)
	I0829 19:35:59.843887   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 19:35:59.843915   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 19:35:59.843925   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.844004   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:35:59.844019   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.849625   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 19:36:00.432638   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.078312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.170365   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.668465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.670347   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.294196   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.450154791s)
	I0829 19:36:01.294230   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 19:36:01.294273   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:01.294336   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:03.144937   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.850574318s)
	I0829 19:36:03.144978   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 19:36:03.145018   78865 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.145081   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.803763   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 19:36:03.803802   78865 cache_images.go:123] Successfully loaded all cached images
	I0829 19:36:03.803807   78865 cache_images.go:92] duration metric: took 14.831492974s to LoadCachedImages
	I0829 19:36:03.803818   78865 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.31.0 crio true true} ...
	I0829 19:36:03.803927   78865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
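
The kubelet fragment logged at kubeadm.go:946 above is, presumably, the content of the 316-byte 10-kubeadm.conf drop-in scp'd to /etc/systemd/system/kubelet.service.d/ a few lines further down. Assembled as a systemd drop-in it would read roughly:

	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76

	[Install]

The empty ExecStart= line is the usual systemd idiom for clearing the ExecStart inherited from the base kubelet.service, so the drop-in's command line fully replaces it.
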
	I0829 19:36:03.803988   78865 ssh_runner.go:195] Run: crio config
	I0829 19:36:03.854859   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:03.854879   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:03.854894   78865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:36:03.854915   78865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-690795 NodeName:no-preload-690795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:36:03.855055   78865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-690795"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:36:03.855114   78865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:36:03.865163   78865 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:36:03.865236   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:36:03.874348   78865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:36:03.891540   78865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:36:03.908488   78865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 19:36:03.926440   78865 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0829 19:36:03.930270   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:36:03.942353   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:36:04.066646   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:36:04.083872   78865 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795 for IP: 192.168.39.76
	I0829 19:36:04.083901   78865 certs.go:194] generating shared ca certs ...
	I0829 19:36:04.083921   78865 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:36:04.084106   78865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:36:04.084172   78865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:36:04.084186   78865 certs.go:256] generating profile certs ...
	I0829 19:36:04.084307   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/client.key
	I0829 19:36:04.084432   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key.8a2db174
	I0829 19:36:04.084492   78865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key
	I0829 19:36:04.084656   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:36:04.084705   78865 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:36:04.084718   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:36:04.084753   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:36:04.084790   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:36:04.084827   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:36:04.084883   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:36:04.085744   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:36:04.124689   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:36:04.158769   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:36:04.188748   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:36:04.217577   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:36:04.251166   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:36:04.282961   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:36:04.306431   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:36:04.329260   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:36:04.365050   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:36:04.393054   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:36:04.417384   78865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:36:04.434555   78865 ssh_runner.go:195] Run: openssl version
	I0829 19:36:04.440074   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:36:04.451378   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455603   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455655   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.461114   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:36:04.472522   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:36:04.483064   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487316   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487383   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.492860   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:36:04.504284   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:36:04.515522   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519853   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519908   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.525240   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
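The three certificate blocks above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash with `openssl x509 -hash -noout`, and link it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients trust it. A small Go sketch of that pattern, shelling out to openssl the same way the logged commands do (the helper name is made up; the path is taken from the log):

    // trustcert.go: install a CA cert under its OpenSSL subject-hash name.
    // Sketch only, mirroring the commands seen in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pem string) error {
        // openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale link if one exists
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }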
	I0829 19:36:04.536612   78865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:36:04.540905   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:36:04.546622   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:36:04.552303   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:36:04.558306   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:36:04.564129   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:36:04.569635   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
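The `-checkend 86400` calls above ask OpenSSL whether each control-plane certificate (apiserver-etcd-client, apiserver-kubelet-client, etcd server/healthcheck/peer, front-proxy-client) will still be valid 86400 seconds, i.e. 24 hours, from now; a non-zero exit would mean the cert needs regeneration. The same check can be done in Go without shelling out (a sketch, not minikube's code):

    // checkend.go: report whether an x509 certificate expires within the next 24h,
    // equivalent to `openssl x509 -checkend 86400`. Sketch only.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }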
	I0829 19:36:04.575196   78865 kubeadm.go:392] StartCluster: {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:36:04.575279   78865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:36:04.575360   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.619563   78865 cri.go:89] found id: ""
	I0829 19:36:04.619638   78865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:36:04.629655   78865 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:36:04.629675   78865 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:36:04.629785   78865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:36:04.638771   78865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:36:04.639763   78865 kubeconfig.go:125] found "no-preload-690795" server: "https://192.168.39.76:8443"
	I0829 19:36:04.641783   78865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:36:04.650605   78865 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0829 19:36:04.650634   78865 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:36:04.650644   78865 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:36:04.650693   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.685589   78865 cri.go:89] found id: ""
	I0829 19:36:04.685656   78865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:36:04.702584   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:36:04.711693   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:36:04.711712   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:36:04.711753   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:36:04.720291   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:36:04.720349   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:36:04.729301   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:36:04.739449   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:36:04.739513   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:36:04.748786   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.757128   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:36:04.757175   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.767533   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:36:04.777322   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:36:04.777373   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:36:04.786269   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:36:04.795387   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:04.904530   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.430803   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:07.431525   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.169466   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.669719   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:05.750216   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.949551   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.043930   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
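The restart path above does not re-run a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) against the generated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the cached v1.31.0 binaries. A sketch of driving those phases from Go with os/exec (phase list and ordering copied from the log; minikube actually runs them over SSH via ssh_runner):

    // phases.go: replay the kubeadm init phases seen in the log, in order. Sketch only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            // Prepend the cached binaries; the later PATH entry simply shadows the inherited one.
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.0:"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                return
            }
        }
    }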
	I0829 19:36:06.140396   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:36:06.140505   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.641069   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.141458   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.161360   78865 api_server.go:72] duration metric: took 1.020963124s to wait for apiserver process to appear ...
	I0829 19:36:07.161390   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:36:07.161426   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.327675   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:36:10.327707   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:36:10.327721   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.396704   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.396737   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:10.661699   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.666518   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.666544   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.162227   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.167736   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.167774   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.662428   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.668688   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.668727   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:12.162372   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:12.168297   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:36:12.175933   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:36:12.175956   78865 api_server.go:131] duration metric: took 5.014557664s to wait for apiserver health ...
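The healthz wait above is a plain poll loop: hit https://192.168.39.76:8443/healthz, treat 403 (anonymous user rejected) and 500 (post-start hooks still failing) as "not ready yet", and stop on 200; in this run the whole wait took about 5 seconds. A minimal Go sketch of that loop (TLS verification is skipped here only to keep the sketch short; the real client trusts the cluster CA):

    // healthwait.go: poll the apiserver /healthz endpoint until it returns 200.
    // Sketch of the wait loop in the log; not minikube's actual client setup.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: apiserver is up
                }
                // 403 and 500 just mean "keep waiting", as in the log above.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.39.76:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }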
	I0829 19:36:12.175967   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:12.175975   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:12.177903   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:36:09.930962   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:11.932180   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.669915   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.169150   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:12.179056   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:36:12.202639   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
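Configuring the bridge CNI above amounts to writing a single conflist (496 bytes in this run) into /etc/cni/net.d. The sketch below writes a minimal bridge-plus-portmap conflist of the same general shape; the field values are illustrative and are not the exact 1-k8s.conflist that minikube generates:

    // cniconf.go: drop a minimal bridge CNI conflist into /etc/cni/net.d. Sketch only.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }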
	I0829 19:36:12.221804   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:36:12.242859   78865 system_pods.go:59] 8 kube-system pods found
	I0829 19:36:12.242897   78865 system_pods.go:61] "coredns-6f6b679f8f-j8zzh" [01eaffa5-a976-441c-987c-bdf3b7f72cd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:36:12.242905   78865 system_pods.go:61] "etcd-no-preload-690795" [df54ae59-44ff-4f7b-b6c0-6145bdae3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:36:12.242912   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [aee247f2-1381-4571-a671-2cf140c78196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:36:12.242919   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [69244a85-2778-46c8-a95c-d0f8a264c0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:36:12.242923   78865 system_pods.go:61] "kube-proxy-q4mbt" [985478f9-235d-4922-a7fd-a0cbdddf3f68] Running
	I0829 19:36:12.242934   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [e1e141ab-eb79-4c87-bccd-274f1e7495b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:36:12.242940   78865 system_pods.go:61] "metrics-server-6867b74b74-svnwn" [e096a3dc-1166-4ee3-9f3f-e044064a5a13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:36:12.242945   78865 system_pods.go:61] "storage-provisioner" [6fc868fa-2221-45ad-903e-cd3d2297a3e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:36:12.242952   78865 system_pods.go:74] duration metric: took 21.125083ms to wait for pod list to return data ...
	I0829 19:36:12.242962   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:36:12.253567   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:36:12.253598   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:36:12.253612   78865 node_conditions.go:105] duration metric: took 10.645029ms to run NodePressure ...
	I0829 19:36:12.253634   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:12.514683   78865 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520060   78865 kubeadm.go:739] kubelet initialised
	I0829 19:36:12.520082   78865 kubeadm.go:740] duration metric: took 5.371928ms waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520088   78865 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:36:12.524795   78865 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:14.533484   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:14.430676   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:16.930723   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.668846   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.669308   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.031326   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.530568   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.430550   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.431080   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.431736   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.168590   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.168898   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.531983   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.032162   78865 pod_ready.go:93] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:22.032187   78865 pod_ready.go:82] duration metric: took 9.507358099s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:22.032200   78865 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038935   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.038956   78865 pod_ready.go:82] duration metric: took 1.006750868s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038966   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043258   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.043278   78865 pod_ready.go:82] duration metric: took 4.305789ms for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043298   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049140   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.049159   78865 pod_ready.go:82] duration metric: took 5.852855ms for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049170   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055033   78865 pod_ready.go:93] pod "kube-proxy-q4mbt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.055054   78865 pod_ready.go:82] duration metric: took 5.87681ms for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055067   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229706   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.229734   78865 pod_ready.go:82] duration metric: took 174.6598ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229748   78865 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
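Each pod_ready wait above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, and now metrics-server) boils down to the same check: fetch the pod and look for a PodReady condition with status True. A client-go sketch of that predicate, as a library snippet with clientset construction omitted (the package and function names are made up, not minikube's pod_ready.go):

    // podready.go: the "Ready" check behind the pod_ready lines above, sketched with client-go.
    package podready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady returns true when the pod carries a Ready condition with status True.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }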
	I0829 19:36:25.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:25.930818   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.430312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.169024   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:26.169599   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.668840   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:27.736899   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.235632   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.430611   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.930362   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.669561   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.671106   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.236488   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.736240   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.931264   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.430665   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
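When no apiserver process turns up, the log-gathering pass above enumerates CRI containers per component with `crictl ps -a --quiet --name=<component>`, pulls kubelet/CRI-O journals and dmesg, and for the final "container status" step falls back from crictl to `docker ps -a`. A Go sketch of that fallback (names here are illustrative, not minikube's logs.go):

    // containerstatus.go: list containers with crictl, falling back to docker,
    // mirroring the "container status" gathering step in the log. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
            if err == nil {
                return string(out), nil
            }
        }
        // Fall back to docker, as the logged `|| sudo docker ps -a` does.
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }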
	I0829 19:36:35.172336   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.671072   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:36.736855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:38.736902   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:39.929907   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.930998   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:40.169548   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:42.672996   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.236777   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.736150   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.931489   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:45.933439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.431152   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:45.169686   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:47.169784   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:46.236187   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.736605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.431543   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.931361   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:49.669984   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.168756   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.736642   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:53.236282   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.430710   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:57.430791   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:54.169625   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.169688   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.170249   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.235887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:00.236133   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.931992   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.430785   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:00.669482   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.669691   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.237155   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.237334   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.431033   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.431419   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.431901   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.168832   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:07.669695   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.736029   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.736415   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:10.930895   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.430209   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:10.168947   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:12.669439   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:11.236100   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.236138   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.930954   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.931355   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.669895   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.170115   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.736304   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:19.736492   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.429878   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.430233   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:19.668651   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:21.669949   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.670402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.235749   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.236078   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.431492   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.930440   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:26.168605   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.170938   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.735518   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.737503   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.431222   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.931202   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.668490   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:32.669630   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.235788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.735661   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.931996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.932964   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.429817   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:35.168843   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.169415   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.736103   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.736554   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.235995   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.430698   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.430836   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
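	[editor's note] The cycle above repeats for the rest of this log: minikube pgreps for kube-apiserver, probes crictl for each expected control-plane container, finds none, and falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A rough manual equivalent of the same probe, run on the node, is sketched below; the container names and commands are taken from the log, everything else (the loop itself) is an assumption, not minikube's actual implementation.

	# Hedged sketch: probe for the control-plane containers this log keeps checking.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  fi
	done
	# When nothing is found, the log gathering above falls back to:
	#   sudo journalctl -u kubelet -n 400
	#   sudo journalctl -u crio -n 400
	#   sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# which fails here because nothing is listening on localhost:8443.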
	I0829 19:37:39.670059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.169724   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.237785   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.736417   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.929743   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.930603   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.669068   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:47.169440   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.737461   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.236079   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.431026   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.931134   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
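	[editor's note] The interleaved pod_ready.go lines come from the other clusters in this run (processes 79073, 78865, 79559) polling their metrics-server pods, which never report Ready. A roughly equivalent manual check is sketched below; it assumes kubectl access to one of those clusters and the usual k8s-app=metrics-server label, which is not taken from this log.

	# Hedged sketch: print the Ready condition of metrics-server pods in kube-system.
	kubectl -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'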
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:49.671574   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:52.168702   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.237371   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:53.736122   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.430278   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.431024   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.169824   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.668831   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.236564   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.736176   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.930996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.931806   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.430980   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:59.169059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:01.668656   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.669716   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.736901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.236263   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.431293   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:07.930953   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.168530   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.169657   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.736429   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.236731   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.931677   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.431484   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:10.668769   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.669771   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:10.736651   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.737433   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.236521   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:14.930832   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:16.931496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:14.669989   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.169402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.736005   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:20.236887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:19.430240   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.930946   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:19.170077   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.670142   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.672076   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:22.237387   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.737069   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.934621   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:26.430632   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:26.168927   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.169453   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:27.235855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:29.236537   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.929913   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.930403   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.931280   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:30.169738   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.669256   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:31.736303   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:34.235284   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:35.430450   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.931562   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:35.169023   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.668411   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:36.236332   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:38.236971   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:40.237468   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.932147   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.430752   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
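	The probe cycle that process 79869 repeats above can be replayed by hand to confirm that no control-plane container ever appears on this node. A minimal sketch, assuming shell access to the guest (e.g. via minikube ssh) and that crictl is installed there; the commands are the same ones the log shows:
	
	    # probe for each expected control-plane container, as the test does
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -n "$ids" ]; then
	        echo "$name: $ids"
	      else
	        echo "no container found matching \"$name\""
	      fi
	    done
	    # the same log sources the test gathers when the probes come back empty
	    sudo journalctl -u kubelet -n 400 | tail -n 20
	    sudo journalctl -u crio -n 400 | tail -n 20
	    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 20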
	I0829 19:38:40.170246   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.669492   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.737831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.236831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:44.930652   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:46.930941   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:44.669939   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.168670   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.735386   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.736163   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.431741   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.931271   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
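	Every describe-nodes attempt in this run fails identically: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and nothing is listening there because the kube-apiserver container was never created. A quick confirmation from inside the guest (a hedged sketch; it assumes ss and curl are present in the image, which this log does not show):
	
	    # the apiserver's secure port should have a listener; here it will not
	    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	    # reproduces the same refusal kubectl reports above
	    curl -sk https://localhost:8443/healthz || echo "connection refused, as in the log"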
	I0829 19:38:49.169856   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.669655   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:52.236654   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:54.735292   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:53.931350   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.430287   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.432418   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:54.170237   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.668188   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.668309   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.736467   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:59.236483   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:00.930424   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.931161   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:00.668839   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.670762   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.236788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:03.237406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.430398   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.431044   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
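	Stripped of the log gathering, the 79869 entries amount to a plain wait loop: pgrep for a kube-apiserver process, and if none is found, list containers, collect logs, sleep a few seconds, and try again. A reduced sketch of that pattern, illustrative only; the five-minute deadline and three-second sleep are assumptions, not minikube's actual values:
	
	    # poll until a kube-apiserver process shows up, or give up after five minutes
	    deadline=$((SECONDS + 300))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "kube-apiserver never started"
	        break
	      fi
	      sleep 3
	    done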
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:05.169920   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.170223   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.735375   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.737017   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.235794   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:09.930945   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:11.931285   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
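	The interleaved pod_ready.go lines from processes 79073, 78865 and 79559 are polling the Ready condition of their clusters' metrics-server pods, which stays False throughout this window. The same condition can be read directly with kubectl; a sketch using one of the pod names taken from the log (point kubectl at the matching cluster context first):
	
	    kubectl --namespace kube-system get pod metrics-server-6867b74b74-tbkxg \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'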
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:09.669065   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.169167   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.236756   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.237536   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.431203   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.930983   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:14.169307   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.163640   79073 pod_ready.go:82] duration metric: took 4m0.000694226s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:16.163683   79073 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:16.163706   79073 pod_ready.go:39] duration metric: took 4m12.036045825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:16.163738   79073 kubeadm.go:597] duration metric: took 4m20.35086556s to restartPrimaryControlPlane
	W0829 19:39:16.163795   79073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:16.163827   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:16.736978   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.236047   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.431674   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:21.930447   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:21.236189   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.236347   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.237213   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.932634   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:26.431145   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:28.431496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:27.237871   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:29.735887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:30.931849   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:33.424593   79559 pod_ready.go:82] duration metric: took 4m0.000177098s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:33.424633   79559 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:33.424656   79559 pod_ready.go:39] duration metric: took 4m10.047294609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:33.424687   79559 kubeadm.go:597] duration metric: took 4m17.474785939s to restartPrimaryControlPlane
	W0829 19:39:33.424745   79559 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:33.424773   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.737021   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.236447   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:36.736406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:39.235941   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:42.401382   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.237529155s)
	I0829 19:39:42.401460   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:42.428754   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:42.441896   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:42.456122   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:42.456147   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:42.456190   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:42.471887   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:42.471947   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:42.483709   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:42.493000   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:42.493070   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:42.511916   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.520829   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:42.520891   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.530567   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:42.540199   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:42.540252   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:42.550058   79073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:42.596809   79073 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:39:42.596966   79073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:42.706623   79073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:42.706766   79073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:42.706931   79073 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:42.717740   79073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:42.719760   79073 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:42.719862   79073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:42.719929   79073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:42.720023   79073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:42.720079   79073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:42.720144   79073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:42.720193   79073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:42.720248   79073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:42.720315   79073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:42.720386   79073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:42.720459   79073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:42.720496   79073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:42.720555   79073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:42.827328   79073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:43.276222   79073 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:39:43.445594   79073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:43.554811   79073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:43.788184   79073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:43.788791   79073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:43.791871   79073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:43.794448   79073 out.go:235]   - Booting up control plane ...
	I0829 19:39:43.794600   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:43.794702   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:43.794800   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:43.813894   79073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:43.822272   79073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:43.822357   79073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:41.236034   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:43.236446   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:45.237605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:39:43.950283   79073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:39:43.950458   79073 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:39:44.452956   79073 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.82915ms
	I0829 19:39:44.453068   79073 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:39:49.455000   79073 kubeadm.go:310] [api-check] The API server is healthy after 5.001789194s
	I0829 19:39:49.473145   79073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:39:49.496760   79073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:39:49.530950   79073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:39:49.531148   79073 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-920571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:39:49.548546   79073 kubeadm.go:310] [bootstrap-token] Using token: bc4428.p8e3szrujohqnvnh
	I0829 19:39:47.735610   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.735833   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.549992   79073 out.go:235]   - Configuring RBAC rules ...
	I0829 19:39:49.550151   79073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:39:49.558070   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:39:49.573758   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:39:49.579988   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:39:49.585250   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:39:49.592477   79073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:39:49.863168   79073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:39:50.294056   79073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:39:50.862652   79073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:39:50.863644   79073 kubeadm.go:310] 
	I0829 19:39:50.863717   79073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:39:50.863729   79073 kubeadm.go:310] 
	I0829 19:39:50.863861   79073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:39:50.863881   79073 kubeadm.go:310] 
	I0829 19:39:50.863917   79073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:39:50.864019   79073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:39:50.864101   79073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:39:50.864111   79073 kubeadm.go:310] 
	I0829 19:39:50.864215   79073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:39:50.864225   79073 kubeadm.go:310] 
	I0829 19:39:50.864298   79073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:39:50.864312   79073 kubeadm.go:310] 
	I0829 19:39:50.864398   79073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:39:50.864517   79073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:39:50.864617   79073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:39:50.864631   79073 kubeadm.go:310] 
	I0829 19:39:50.864743   79073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:39:50.864856   79073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:39:50.864869   79073 kubeadm.go:310] 
	I0829 19:39:50.864983   79073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865110   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:39:50.865142   79073 kubeadm.go:310] 	--control-plane 
	I0829 19:39:50.865152   79073 kubeadm.go:310] 
	I0829 19:39:50.865262   79073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:39:50.865270   79073 kubeadm.go:310] 
	I0829 19:39:50.865370   79073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865527   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:39:50.866485   79073 kubeadm.go:310] W0829 19:39:42.565022    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866852   79073 kubeadm.go:310] W0829 19:39:42.566073    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866979   79073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
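	The two deprecation warnings above come from the v1beta3 kubeadm config that minikube generates; upgrading that file with kubeadm's own migration helper would look roughly like the following sketch, run on the control-plane node (the --new-config path is an assumption; only /var/tmp/minikube/kubeadm.yaml appears in this run):
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm.migrated.yaml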
	I0829 19:39:50.867009   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:39:50.867020   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:39:50.868683   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:39:50.869952   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:39:50.880385   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
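	The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node; it can be inspected after the fact with a command along these lines (illustrative, not captured in this log):
	
	  minikube -p embed-certs-920571 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"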
	I0829 19:39:50.900028   79073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:39:50.900152   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:50.900187   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-920571 minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=embed-certs-920571 minikube.k8s.io/primary=true
	I0829 19:39:51.090710   79073 ops.go:34] apiserver oom_adj: -16
	I0829 19:39:51.090865   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:51.591720   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.091579   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.591872   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.091671   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.591191   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.091640   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.591356   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.674005   79073 kubeadm.go:1113] duration metric: took 3.773916232s to wait for elevateKubeSystemPrivileges
	I0829 19:39:54.674046   79073 kubeadm.go:394] duration metric: took 4m58.910639816s to StartCluster
	I0829 19:39:54.674070   79073 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.674178   79073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:39:54.675789   79073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.676038   79073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:39:54.676095   79073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:39:54.676184   79073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-920571"
	I0829 19:39:54.676210   79073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-920571"
	I0829 19:39:54.676222   79073 addons.go:69] Setting metrics-server=true in profile "embed-certs-920571"
	I0829 19:39:54.676225   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:39:54.676241   79073 addons.go:234] Setting addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:54.676264   79073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920571"
	I0829 19:39:54.676216   79073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-920571"
	W0829 19:39:54.676329   79073 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:39:54.676360   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	W0829 19:39:54.676392   79073 addons.go:243] addon metrics-server should already be in state true
	I0829 19:39:54.676455   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.676650   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676664   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676682   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676684   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676824   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676859   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.677794   79073 out.go:177] * Verifying Kubernetes components...
	I0829 19:39:54.679112   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:39:54.694669   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:39:54.694717   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0829 19:39:54.695090   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695420   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695532   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0829 19:39:54.695640   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695656   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695925   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695948   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695951   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.696038   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696266   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696373   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.696392   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.696443   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.696600   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.696629   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.696745   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.697378   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.697413   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.702955   79073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-920571"
	W0829 19:39:54.702978   79073 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:39:54.703003   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.703347   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.703377   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.714194   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0829 19:39:54.714526   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0829 19:39:54.714735   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.714916   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.715368   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715369   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715389   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715401   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715712   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715713   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715944   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.715943   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.717556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.717758   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.718972   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0829 19:39:54.719212   79073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:39:54.719303   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.719212   79073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:39:52.236231   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.238843   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.719723   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.719735   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.720033   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.720307   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:39:54.720322   79073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:39:54.720342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.720533   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.720559   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.720952   79073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:54.720975   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:39:54.720992   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.723754   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724174   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.724198   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724516   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.724684   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.724820   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.724879   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724973   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.725426   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.725466   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.725687   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.725827   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.725982   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.726117   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.743443   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0829 19:39:54.744025   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.744590   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.744618   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.744908   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.745030   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.746560   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.746809   79073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:54.746819   79073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:39:54.746831   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.749422   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749802   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.749827   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749904   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.750058   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.750206   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.750320   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.902922   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:39:54.921933   79073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936483   79073 node_ready.go:49] node "embed-certs-920571" has status "Ready":"True"
	I0829 19:39:54.936513   79073 node_ready.go:38] duration metric: took 14.542582ms for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936524   79073 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:54.945389   79073 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:55.076394   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:39:55.076421   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:39:55.089140   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:55.096473   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:55.128207   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:39:55.128235   79073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:39:55.186402   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.186429   79073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:39:55.262731   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.548177   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548521   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548542   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.548555   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548564   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548824   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548857   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Closing plugin on server side
	I0829 19:39:55.548872   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.555956   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.555971   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.556210   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.556227   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020289   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020317   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020610   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020632   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020642   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020650   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020912   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020931   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.369749   79073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.106975723s)
	I0829 19:39:56.369809   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.369825   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370119   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370143   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370154   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.370168   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370407   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370428   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370440   79073 addons.go:475] Verifying addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:56.373030   79073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:39:56.374322   79073 addons.go:510] duration metric: took 1.698226444s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
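	Once the metrics-server addon is reported as enabled, a typical sanity check would be the following sketch (the k8s-app=metrics-server label is the addon's usual selector and is assumed here, as is kubectl pointing at this profile's context):
	
	  kubectl --context embed-certs-920571 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context embed-certs-920571 top nodes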
	I0829 19:39:56.460329   79073 pod_ready.go:93] pod "etcd-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:56.460362   79073 pod_ready.go:82] duration metric: took 1.51494335s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:56.460375   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467017   79073 pod_ready.go:93] pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:58.467040   79073 pod_ready.go:82] duration metric: took 2.006657264s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467050   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:59.826535   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.4017346s)
	I0829 19:39:59.826609   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:59.849311   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:59.859855   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:59.874237   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:59.874262   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:59.874315   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:39:59.883719   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:59.883785   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:59.893307   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:39:59.902478   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:59.902519   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:59.912664   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.932387   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:59.932443   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.948358   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:39:59.965812   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:59.965867   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:59.975437   79559 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:00.022167   79559 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:00.022347   79559 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:00.126622   79559 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:00.126777   79559 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:00.126914   79559 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:00.135123   79559 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:56.736712   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:59.235639   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:00.137714   79559 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:00.137806   79559 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:00.137875   79559 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:00.138003   79559 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:00.138114   79559 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:00.138184   79559 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:00.138240   79559 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:00.138297   79559 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:00.138351   79559 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:00.138443   79559 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:00.138555   79559 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:00.138607   79559 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:00.138682   79559 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:00.368674   79559 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:00.454426   79559 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:00.576835   79559 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:00.650342   79559 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:01.038392   79559 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:01.038806   79559 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:01.041297   79559 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:01.043020   79559 out.go:235]   - Booting up control plane ...
	I0829 19:40:01.043127   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:01.043224   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:01.043501   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:01.062342   79559 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:01.068185   79559 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:01.068247   79559 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:01.202906   79559 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:01.203076   79559 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:01.705241   79559 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.336154ms
	I0829 19:40:01.705368   79559 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:00.476336   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:02.973188   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.473576   79073 pod_ready.go:93] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.473607   79073 pod_ready.go:82] duration metric: took 5.006550689s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.473616   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478026   79073 pod_ready.go:93] pod "kube-proxy-25cmq" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.478045   79073 pod_ready.go:82] duration metric: took 4.423884ms for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478054   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482541   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.482560   79073 pod_ready.go:82] duration metric: took 4.499742ms for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482566   79073 pod_ready.go:39] duration metric: took 8.54603076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:03.482581   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:03.482623   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:03.502670   79073 api_server.go:72] duration metric: took 8.826595134s to wait for apiserver process to appear ...
	I0829 19:40:03.502695   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:03.502718   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:40:03.507953   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:40:03.508948   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:03.508968   79073 api_server.go:131] duration metric: took 6.265433ms to wait for apiserver health ...
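	The healthz probe performed above can be reproduced by hand against the same endpoint; with -k to skip certificate verification it is simply (illustrative, not part of this run):
	
	  curl -k https://192.168.61.243:8443/healthz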
	I0829 19:40:03.508977   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:03.514929   79073 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:03.514962   79073 system_pods.go:61] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.514971   79073 system_pods.go:61] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.514979   79073 system_pods.go:61] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.514987   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.514994   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.515000   79073 system_pods.go:61] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.515009   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.515018   79073 system_pods.go:61] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.515027   79073 system_pods.go:61] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.515036   79073 system_pods.go:74] duration metric: took 6.052187ms to wait for pod list to return data ...
	I0829 19:40:03.515046   79073 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:03.518040   79073 default_sa.go:45] found service account: "default"
	I0829 19:40:03.518060   79073 default_sa.go:55] duration metric: took 3.004653ms for default service account to be created ...
	I0829 19:40:03.518069   79073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:03.523915   79073 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:03.523942   79073 system_pods.go:89] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.523949   79073 system_pods.go:89] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.523954   79073 system_pods.go:89] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.523958   79073 system_pods.go:89] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.523962   79073 system_pods.go:89] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.523965   79073 system_pods.go:89] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.523968   79073 system_pods.go:89] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.523973   79073 system_pods.go:89] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.523978   79073 system_pods.go:89] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.523986   79073 system_pods.go:126] duration metric: took 5.911567ms to wait for k8s-apps to be running ...
	I0829 19:40:03.523997   79073 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:03.524049   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:03.541502   79073 system_svc.go:56] duration metric: took 17.4955ms WaitForService to wait for kubelet
	I0829 19:40:03.541538   79073 kubeadm.go:582] duration metric: took 8.865466463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:03.541564   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:03.544700   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:03.544728   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:03.544744   79073 node_conditions.go:105] duration metric: took 3.172559ms to run NodePressure ...
	I0829 19:40:03.544758   79073 start.go:241] waiting for startup goroutines ...
	I0829 19:40:03.544771   79073 start.go:246] waiting for cluster config update ...
	I0829 19:40:03.544789   79073 start.go:255] writing updated cluster config ...
	I0829 19:40:03.545136   79073 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:03.609413   79073 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:03.611490   79073 out.go:177] * Done! kubectl is now configured to use "embed-certs-920571" cluster and "default" namespace by default
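At this point the kubeconfig under /home/jenkins/minikube-integration/19531-13056/ has been pointed at the new cluster. A quick manual sanity check of that state (illustrative commands only, not part of the test run) would be along these lines:

  kubectl config current-context        # expected to print the profile name, embed-certs-920571
  kubectl get nodes -o wide             # the single control-plane node should report Ready
  kubectl -n kube-system get pods       # core components plus metrics-server and storage-provisioner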
	I0829 19:40:01.236210   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.236420   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:05.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:06.707891   79559 kubeadm.go:310] [api-check] The API server is healthy after 5.002523987s
	I0829 19:40:06.719470   79559 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:06.733886   79559 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:06.759672   79559 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:06.759933   79559 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-672127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:06.771514   79559 kubeadm.go:310] [bootstrap-token] Using token: fzav4x.eeztheucmrep51py
	I0829 19:40:06.772887   79559 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:06.773014   79559 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:06.778644   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:06.792388   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:06.798560   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:06.801930   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:06.805767   79559 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:07.119680   79559 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:07.536660   79559 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:08.115528   79559 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:08.115550   79559 kubeadm.go:310] 
	I0829 19:40:08.115621   79559 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:08.115657   79559 kubeadm.go:310] 
	I0829 19:40:08.115780   79559 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:08.115802   79559 kubeadm.go:310] 
	I0829 19:40:08.115843   79559 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:08.115929   79559 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:08.116002   79559 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:08.116011   79559 kubeadm.go:310] 
	I0829 19:40:08.116087   79559 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:08.116099   79559 kubeadm.go:310] 
	I0829 19:40:08.116154   79559 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:08.116173   79559 kubeadm.go:310] 
	I0829 19:40:08.116247   79559 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:08.116386   79559 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:08.116477   79559 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:08.116487   79559 kubeadm.go:310] 
	I0829 19:40:08.116599   79559 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:08.116705   79559 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:08.116712   79559 kubeadm.go:310] 
	I0829 19:40:08.116779   79559 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.116879   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:08.116931   79559 kubeadm.go:310] 	--control-plane 
	I0829 19:40:08.116947   79559 kubeadm.go:310] 
	I0829 19:40:08.117048   79559 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:08.117058   79559 kubeadm.go:310] 
	I0829 19:40:08.117154   79559 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.117270   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:08.118512   79559 kubeadm.go:310] W0829 19:39:59.991394    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118870   79559 kubeadm.go:310] W0829 19:39:59.992249    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118981   79559 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:08.119009   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:40:08.119019   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:08.120832   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:08.122029   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:08.133326   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
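The 496-byte conflist copied here is not reproduced in the log. As a rough sketch of the format involved, a bridge CNI config of this kind generally pairs the bridge plugin with portmap and host-local IPAM; the contents below are hypothetical and only illustrate the shape of such a file, not the exact bytes minikube wrote:

  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isGateway": true, "isDefaultGateway": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF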
	I0829 19:40:08.150808   79559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:08.150867   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:08.150884   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-672127 minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=default-k8s-diff-port-672127 minikube.k8s.io/primary=true
	I0829 19:40:08.170047   79559 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:08.350103   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:07.736119   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:10.236910   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:08.850762   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.350244   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.850222   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.350462   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.850237   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.350179   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.851033   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.351069   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.442963   79559 kubeadm.go:1113] duration metric: took 4.29215456s to wait for elevateKubeSystemPrivileges
	I0829 19:40:12.442998   79559 kubeadm.go:394] duration metric: took 4m56.544013459s to StartCluster
	I0829 19:40:12.443020   79559 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.443110   79559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:40:12.444757   79559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.444998   79559 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:40:12.445061   79559 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:40:12.445138   79559 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445151   79559 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445173   79559 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445181   79559 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:40:12.445179   79559 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-672127"
	I0829 19:40:12.445210   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445210   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:40:12.445266   79559 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445313   79559 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445323   79559 addons.go:243] addon metrics-server should already be in state true
	I0829 19:40:12.445347   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445625   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445658   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445662   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445683   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445737   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445775   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.446414   79559 out.go:177] * Verifying Kubernetes components...
	I0829 19:40:12.447652   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:40:12.461386   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0829 19:40:12.461436   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0829 19:40:12.461805   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.461831   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462057   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0829 19:40:12.462324   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462327   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462341   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462347   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462373   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462701   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462798   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462807   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462817   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462886   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.463109   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.463360   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463392   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.463586   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463607   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.465961   79559 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.465971   79559 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:40:12.465991   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.466309   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.466355   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.480989   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0829 19:40:12.481216   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 19:40:12.481407   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481639   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481843   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.481858   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482222   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.482249   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482291   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482440   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.482576   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482745   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.484681   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485336   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485664   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0829 19:40:12.486377   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.486547   79559 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:40:12.486922   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.486945   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.487310   79559 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:40:12.487586   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.488042   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:40:12.488060   79559 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:40:12.488081   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.488266   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.488307   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.488874   79559 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.488897   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:40:12.488914   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.492291   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492699   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492814   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.492844   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493059   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493128   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.493144   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493259   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493300   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493432   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.493471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493822   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.493972   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.494114   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.505220   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0829 19:40:12.505690   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.506337   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.506363   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.506727   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.506899   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.508602   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.508796   79559 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.508810   79559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:40:12.508829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.511310   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511660   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.511691   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.511969   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.512110   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.512253   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.642279   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:40:12.666598   79559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682873   79559 node_ready.go:49] node "default-k8s-diff-port-672127" has status "Ready":"True"
	I0829 19:40:12.682895   79559 node_ready.go:38] duration metric: took 16.267143ms for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682904   79559 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:12.693451   79559 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:12.736525   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:40:12.736548   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:40:12.754764   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:40:12.754786   79559 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:40:12.806826   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:12.806856   79559 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:40:12.817164   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.837896   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.903140   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:14.124266   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.307063383s)
	I0829 19:40:14.124305   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286373382s)
	I0829 19:40:14.124324   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124343   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124430   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221258684s)
	I0829 19:40:14.124473   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124487   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124635   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124649   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124659   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124667   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124794   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124813   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124831   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124848   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124856   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124873   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124864   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124882   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124896   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124902   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124913   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124935   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.125356   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.125359   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.125381   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126568   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.126637   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.126656   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126704   79559 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-672127"
	I0829 19:40:14.193216   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.193238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.193544   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.193562   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.195467   79559 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0829 19:40:12.237641   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.736679   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.196698   79559 addons.go:510] duration metric: took 1.751639165s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
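With storage-provisioner, metrics-server and default-storageclass applied, one way to spot-check the metrics-server addon by hand (not something the test does here) is:

  kubectl -n kube-system get deploy metrics-server
  kubectl get apiservice v1beta1.metrics.k8s.io
  kubectl top nodes   # only works once the APIService reports Available=True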
	I0829 19:40:14.720042   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.199482   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.235908   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.735901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.199705   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.699776   79559 pod_ready.go:93] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.699801   79559 pod_ready.go:82] duration metric: took 7.006327617s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.699810   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704240   79559 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.704261   79559 pod_ready.go:82] duration metric: took 4.444744ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704269   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710740   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.710761   79559 pod_ready.go:82] duration metric: took 2.006486043s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710770   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715111   79559 pod_ready.go:93] pod "kube-proxy-nqbn4" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.715134   79559 pod_ready.go:82] duration metric: took 4.357535ms for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715146   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719192   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.719207   79559 pod_ready.go:82] duration metric: took 4.054087ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719222   79559 pod_ready.go:39] duration metric: took 9.036299009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:21.719234   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:21.719289   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:21.734507   79559 api_server.go:72] duration metric: took 9.289477227s to wait for apiserver process to appear ...
	I0829 19:40:21.734531   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:21.734555   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:40:21.739963   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:40:21.740847   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:21.740865   79559 api_server.go:131] duration metric: took 6.327694ms to wait for apiserver health ...
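The healthz probe above targets https://192.168.50.70:8444/healthz (8444 being the non-default API server port used by this profile). The same check can normally be reproduced without credentials, since /healthz and /version are served to unauthenticated clients by default; a manual equivalent, as a sketch, would be:

  curl -k https://192.168.50.70:8444/healthz   # expected to print: ok
  curl -k https://192.168.50.70:8444/version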
	I0829 19:40:21.740872   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:21.747609   79559 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:21.747636   79559 system_pods.go:61] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.747643   79559 system_pods.go:61] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:21.747648   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.747654   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.747659   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.747662   79559 system_pods.go:61] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.747665   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.747670   79559 system_pods.go:61] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.747674   79559 system_pods.go:61] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.747680   79559 system_pods.go:74] duration metric: took 6.803459ms to wait for pod list to return data ...
	I0829 19:40:21.747689   79559 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:21.750153   79559 default_sa.go:45] found service account: "default"
	I0829 19:40:21.750168   79559 default_sa.go:55] duration metric: took 2.474593ms for default service account to be created ...
	I0829 19:40:21.750175   79559 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:21.901186   79559 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:21.901213   79559 system_pods.go:89] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.901219   79559 system_pods.go:89] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running
	I0829 19:40:21.901222   79559 system_pods.go:89] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.901227   79559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.901231   79559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.901235   79559 system_pods.go:89] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.901238   79559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.901245   79559 system_pods.go:89] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.901249   79559 system_pods.go:89] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.901257   79559 system_pods.go:126] duration metric: took 151.07798ms to wait for k8s-apps to be running ...
	I0829 19:40:21.901263   79559 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:21.901306   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:21.916730   79559 system_svc.go:56] duration metric: took 15.457902ms WaitForService to wait for kubelet
	I0829 19:40:21.916757   79559 kubeadm.go:582] duration metric: took 9.471732105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:21.916773   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:22.099083   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:22.099119   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:22.099133   79559 node_conditions.go:105] duration metric: took 182.354927ms to run NodePressure ...
	I0829 19:40:22.099147   79559 start.go:241] waiting for startup goroutines ...
	I0829 19:40:22.099156   79559 start.go:246] waiting for cluster config update ...
	I0829 19:40:22.099168   79559 start.go:255] writing updated cluster config ...
	I0829 19:40:22.099536   79559 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:22.148307   79559 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:22.150361   79559 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-672127" cluster and "default" namespace by default
	I0829 19:40:21.736121   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:23.229905   78865 pod_ready.go:82] duration metric: took 4m0.000141946s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	E0829 19:40:23.229943   78865 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:40:23.229991   78865 pod_ready.go:39] duration metric: took 4m10.70989222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:23.230021   78865 kubeadm.go:597] duration metric: took 4m18.600330645s to restartPrimaryControlPlane
	W0829 19:40:23.230078   78865 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:40:23.230136   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
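Process 79869 is still stuck at kubeadm's kubelet health check (connection refused on 127.0.0.1:10248). When triaging this kind of failure by hand, the usual first steps are along these lines (generic diagnostics, not taken from this run):

  sudo systemctl status kubelet
  sudo journalctl -u kubelet --no-pager | tail -n 50
  curl -sS http://localhost:10248/healthz   # the endpoint kubeadm polls above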
	I0829 19:40:49.374221   78865 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.144057875s)
	I0829 19:40:49.374297   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:49.389586   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:40:49.399146   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:40:49.408450   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:40:49.408469   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:40:49.408521   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:40:49.417651   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:40:49.417706   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:40:49.427073   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:40:49.435307   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:40:49.435356   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:40:49.443720   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.452437   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:40:49.452493   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.461133   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:40:49.469515   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:40:49.469564   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
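The four grep/rm pairs above implement the stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init re-creates it. Condensed, the pattern is roughly the following shell, offered only as a sketch of the behaviour rather than the literal code path:

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done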
	I0829 19:40:49.478224   78865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:49.523193   78865 kubeadm.go:310] W0829 19:40:49.504457    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.523801   78865 kubeadm.go:310] W0829 19:40:49.505165    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.640221   78865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:57.429227   78865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:57.429293   78865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:57.429396   78865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:57.429536   78865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:57.429665   78865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:57.429757   78865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:40:57.431358   78865 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:57.431434   78865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:57.431485   78865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:57.431566   78865 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:57.431640   78865 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:57.431711   78865 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:57.431786   78865 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:57.431847   78865 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:57.431893   78865 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:57.431956   78865 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:57.432013   78865 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:57.432052   78865 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:57.432109   78865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:57.432186   78865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:57.432275   78865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:57.432352   78865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:57.432444   78865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:57.432518   78865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:57.432595   78865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:57.432648   78865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:57.434057   78865 out.go:235]   - Booting up control plane ...
	I0829 19:40:57.434161   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:57.434245   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:57.434298   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:57.434396   78865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:57.434475   78865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:57.434509   78865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:57.434687   78865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:57.434772   78865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:57.434824   78865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.075612ms
	I0829 19:40:57.434887   78865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:57.434932   78865 kubeadm.go:310] [api-check] The API server is healthy after 5.002117161s
	I0829 19:40:57.435094   78865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:57.435232   78865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:57.435284   78865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:57.435429   78865 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-690795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:57.435472   78865 kubeadm.go:310] [bootstrap-token] Using token: adxyev.rcmf9k5ok190h0g1
	I0829 19:40:57.436846   78865 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:57.436936   78865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:57.437001   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:57.437113   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:57.437214   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:57.437307   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:57.437380   78865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:57.437480   78865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:57.437528   78865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:57.437577   78865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:57.437583   78865 kubeadm.go:310] 
	I0829 19:40:57.437635   78865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:57.437641   78865 kubeadm.go:310] 
	I0829 19:40:57.437704   78865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:57.437710   78865 kubeadm.go:310] 
	I0829 19:40:57.437744   78865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:57.437807   78865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:57.437851   78865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:57.437857   78865 kubeadm.go:310] 
	I0829 19:40:57.437907   78865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:57.437913   78865 kubeadm.go:310] 
	I0829 19:40:57.437951   78865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:57.437957   78865 kubeadm.go:310] 
	I0829 19:40:57.438000   78865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:57.438107   78865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:57.438188   78865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:57.438200   78865 kubeadm.go:310] 
	I0829 19:40:57.438289   78865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:57.438359   78865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:57.438364   78865 kubeadm.go:310] 
	I0829 19:40:57.438429   78865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438507   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:57.438525   78865 kubeadm.go:310] 	--control-plane 
	I0829 19:40:57.438534   78865 kubeadm.go:310] 
	I0829 19:40:57.438611   78865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:57.438621   78865 kubeadm.go:310] 
	I0829 19:40:57.438688   78865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438791   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:57.438814   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:40:57.438825   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:57.440836   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:57.442065   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:57.452700   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:57.469549   78865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:57.469621   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:57.469656   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-690795 minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=no-preload-690795 minikube.k8s.io/primary=true
	I0829 19:40:57.503411   78865 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:57.648807   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.149067   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.649770   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.148932   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.649114   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.149833   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.649474   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.149795   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.649154   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.745084   78865 kubeadm.go:1113] duration metric: took 4.275525047s to wait for elevateKubeSystemPrivileges
	I0829 19:41:01.745117   78865 kubeadm.go:394] duration metric: took 4m57.169926854s to StartCluster
	I0829 19:41:01.745134   78865 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.745209   78865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:41:01.746775   78865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.747005   78865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:41:01.747062   78865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:41:01.747155   78865 addons.go:69] Setting storage-provisioner=true in profile "no-preload-690795"
	I0829 19:41:01.747175   78865 addons.go:69] Setting default-storageclass=true in profile "no-preload-690795"
	I0829 19:41:01.747189   78865 addons.go:234] Setting addon storage-provisioner=true in "no-preload-690795"
	W0829 19:41:01.747199   78865 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:41:01.747200   78865 addons.go:69] Setting metrics-server=true in profile "no-preload-690795"
	I0829 19:41:01.747240   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747246   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:41:01.747243   78865 addons.go:234] Setting addon metrics-server=true in "no-preload-690795"
	W0829 19:41:01.747307   78865 addons.go:243] addon metrics-server should already be in state true
	I0829 19:41:01.747333   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747206   78865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-690795"
	I0829 19:41:01.747652   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747670   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747678   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747702   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747780   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747810   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.748790   78865 out.go:177] * Verifying Kubernetes components...
	I0829 19:41:01.750069   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:41:01.764006   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0829 19:41:01.765511   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766194   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.766218   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.766287   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0829 19:41:01.766670   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766694   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.766912   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.766965   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0829 19:41:01.767129   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767149   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.767304   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.767506   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.767737   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767755   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.768073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.768202   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768241   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.768615   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768646   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.771065   78865 addons.go:234] Setting addon default-storageclass=true in "no-preload-690795"
	W0829 19:41:01.771088   78865 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:41:01.771117   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.771415   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.771441   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.787271   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0829 19:41:01.788003   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.788577   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.788606   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.788885   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0829 19:41:01.789065   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0829 19:41:01.789073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.789361   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.789716   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.789774   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.790084   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.790243   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.790319   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.791018   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.791029   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.791393   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.791721   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.792306   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793557   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793806   78865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:41:01.794942   78865 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:41:01.795033   78865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:01.795049   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:41:01.795067   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.796032   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:41:01.796048   78865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:41:01.796065   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.799646   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800163   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800618   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800826   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800843   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800941   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801043   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801114   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801184   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801239   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801367   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.801484   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.807187   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0829 19:41:01.807604   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.808056   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.808070   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.808471   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.808671   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.810374   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.810569   78865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:01.810579   78865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:41:01.810591   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.813314   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.813766   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.813776   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.814029   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.814187   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.814292   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.814379   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.963011   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:41:01.981935   78865 node_ready.go:35] waiting up to 6m0s for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998366   78865 node_ready.go:49] node "no-preload-690795" has status "Ready":"True"
	I0829 19:41:01.998389   78865 node_ready.go:38] duration metric: took 16.418591ms for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998398   78865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:02.005811   78865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:02.053495   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:02.197657   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:02.239853   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:41:02.239877   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:41:02.270764   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:41:02.270789   78865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:41:02.327819   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.327853   78865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:41:02.380812   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.380843   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381117   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381191   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.381209   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.381217   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381432   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381444   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.384211   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.387013   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.387027   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.387286   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:02.387333   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.387345   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.027502   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:03.027535   78865 pod_ready.go:82] duration metric: took 1.02170157s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.027550   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.410428   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212715771s)
	I0829 19:41:03.410485   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.410503   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412586   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.412590   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412614   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412625   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.412632   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412926   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412947   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412954   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.587379   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203116606s)
	I0829 19:41:03.587437   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587452   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587770   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.587840   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.587859   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587874   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587878   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.588185   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.588206   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.588218   78865 addons.go:475] Verifying addon metrics-server=true in "no-preload-690795"
	I0829 19:41:03.588192   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.590131   78865 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:03.591280   78865 addons.go:510] duration metric: took 1.844219817s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:41:05.035315   78865 pod_ready.go:103] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"False"
	I0829 19:41:06.033037   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:06.033060   78865 pod_ready.go:82] duration metric: took 3.005501862s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:06.033068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039035   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.039059   78865 pod_ready.go:82] duration metric: took 1.005984859s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043096   78865 pod_ready.go:93] pod "kube-proxy-p7zvh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.043116   78865 pod_ready.go:82] duration metric: took 4.042896ms for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043125   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046934   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.046957   78865 pod_ready.go:82] duration metric: took 3.826283ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046966   78865 pod_ready.go:39] duration metric: took 5.048560252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:07.046983   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:41:07.047036   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:41:07.062234   78865 api_server.go:72] duration metric: took 5.315200823s to wait for apiserver process to appear ...
	I0829 19:41:07.062256   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:41:07.062277   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:41:07.068022   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:41:07.069170   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:41:07.069190   78865 api_server.go:131] duration metric: took 6.927858ms to wait for apiserver health ...
	I0829 19:41:07.069198   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:41:07.075909   78865 system_pods.go:59] 9 kube-system pods found
	I0829 19:41:07.075932   78865 system_pods.go:61] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.075939   78865 system_pods.go:61] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.075944   78865 system_pods.go:61] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.075949   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.075953   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.075956   78865 system_pods.go:61] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.075960   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.075964   78865 system_pods.go:61] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.075968   78865 system_pods.go:61] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.075975   78865 system_pods.go:74] duration metric: took 6.771333ms to wait for pod list to return data ...
	I0829 19:41:07.075985   78865 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:41:07.079235   78865 default_sa.go:45] found service account: "default"
	I0829 19:41:07.079255   78865 default_sa.go:55] duration metric: took 3.264804ms for default service account to be created ...
	I0829 19:41:07.079263   78865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:41:07.083981   78865 system_pods.go:86] 9 kube-system pods found
	I0829 19:41:07.084006   78865 system_pods.go:89] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.084014   78865 system_pods.go:89] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.084019   78865 system_pods.go:89] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.084025   78865 system_pods.go:89] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.084029   78865 system_pods.go:89] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.084032   78865 system_pods.go:89] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.084037   78865 system_pods.go:89] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.084042   78865 system_pods.go:89] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.084045   78865 system_pods.go:89] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.084052   78865 system_pods.go:126] duration metric: took 4.784448ms to wait for k8s-apps to be running ...
	I0829 19:41:07.084062   78865 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:41:07.084104   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:07.098513   78865 system_svc.go:56] duration metric: took 14.440998ms WaitForService to wait for kubelet
	I0829 19:41:07.098551   78865 kubeadm.go:582] duration metric: took 5.351518255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:41:07.098574   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:41:07.231160   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:41:07.231189   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:41:07.231200   78865 node_conditions.go:105] duration metric: took 132.62068ms to run NodePressure ...
	I0829 19:41:07.231209   78865 start.go:241] waiting for startup goroutines ...
	I0829 19:41:07.231216   78865 start.go:246] waiting for cluster config update ...
	I0829 19:41:07.231225   78865 start.go:255] writing updated cluster config ...
	I0829 19:41:07.231503   78865 ssh_runner.go:195] Run: rm -f paused
	I0829 19:41:07.283204   78865 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:41:07.284751   78865 out.go:177] * Done! kubectl is now configured to use "no-preload-690795" cluster and "default" namespace by default
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.809043907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960619809016863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96ec167a-bc67-4049-a2d0-ae8ef6030a82 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.809835240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bda9025-a132-467d-9353-49e07f7b2363 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.809908296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bda9025-a132-467d-9353-49e07f7b2363 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.809941365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8bda9025-a132-467d-9353-49e07f7b2363 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.842564245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86c801a2-6db9-4141-b6b9-455e33552b7c name=/runtime.v1.RuntimeService/Version
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.842639905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86c801a2-6db9-4141-b6b9-455e33552b7c name=/runtime.v1.RuntimeService/Version
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.845110356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=178976a5-aedc-47c9-90fd-54ae5ab7bc93 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.845477019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960619845453952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=178976a5-aedc-47c9-90fd-54ae5ab7bc93 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.846070128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fce54cd1-7309-4654-9ee0-5d7cbd554815 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.846140693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fce54cd1-7309-4654-9ee0-5d7cbd554815 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.846175914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fce54cd1-7309-4654-9ee0-5d7cbd554815 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.877104236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b17ef71-f199-46f6-8268-765f24217859 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.877192527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b17ef71-f199-46f6-8268-765f24217859 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.878468907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d06edbc-60c2-4a9c-a3a3-9b141ac17b73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.878892456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960619878869503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d06edbc-60c2-4a9c-a3a3-9b141ac17b73 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.879573825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0cfad5d-8d38-4142-9aa0-35baa431823c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.879636515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0cfad5d-8d38-4142-9aa0-35baa431823c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.879684625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f0cfad5d-8d38-4142-9aa0-35baa431823c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.910953107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afda4c41-a33d-481a-b515-9fcc27082b2b name=/runtime.v1.RuntimeService/Version
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.911049018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afda4c41-a33d-481a-b515-9fcc27082b2b name=/runtime.v1.RuntimeService/Version
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.911829326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b0689f3-0fbe-42a3-935c-cde3206b1e37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.912202845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960619912180694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b0689f3-0fbe-42a3-935c-cde3206b1e37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.912665844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8418dd38-37e5-4512-97b8-d12e9e759985 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.912725398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8418dd38-37e5-4512-97b8-d12e9e759985 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:43:39 old-k8s-version-467349 crio[629]: time="2024-08-29 19:43:39.912761327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8418dd38-37e5-4512-97b8-d12e9e759985 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug29 19:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052596] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.984718] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595405] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.892866] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.060569] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055946] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.216571] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.121311] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.242095] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.546376] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.055907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.984348] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +14.158991] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 19:39] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Aug29 19:41] systemd-fstab-generator[5395]: Ignoring "noauto" option for root device
	[  +0.067610] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:43:40 up 8 min,  0 users,  load average: 0.00, 0.06, 0.04
	Linux old-k8s-version-467349 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0008b8990)
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: goroutine 152 [select]:
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000977ef0, 0x4f0ac20, 0xc000aff860, 0x1, 0xc00009e0c0)
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000148700, 0xc00009e0c0)
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00075c8e0, 0xc0008ced80)
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 29 19:43:37 old-k8s-version-467349 kubelet[5575]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 29 19:43:37 old-k8s-version-467349 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 29 19:43:37 old-k8s-version-467349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 29 19:43:38 old-k8s-version-467349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 29 19:43:38 old-k8s-version-467349 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 29 19:43:38 old-k8s-version-467349 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 29 19:43:38 old-k8s-version-467349 kubelet[5642]: I0829 19:43:38.145840    5642 server.go:416] Version: v1.20.0
	Aug 29 19:43:38 old-k8s-version-467349 kubelet[5642]: I0829 19:43:38.146229    5642 server.go:837] Client rotation is on, will bootstrap in background
	Aug 29 19:43:38 old-k8s-version-467349 kubelet[5642]: I0829 19:43:38.148460    5642 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 29 19:43:38 old-k8s-version-467349 kubelet[5642]: I0829 19:43:38.150251    5642 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 29 19:43:38 old-k8s-version-467349 kubelet[5642]: W0829 19:43:38.150495    5642 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (216.509561ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-467349" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (702.75s)
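
The kubeadm and minikube output above already names the recovery path for this failure mode: the kubelet never became healthy, so no control-plane containers were ever created. Collected into one sketch (assuming shell access to the old-k8s-version-467349 VM, e.g. via `minikube ssh -p old-k8s-version-467349`), the checks suggested by that output are:

	# Is the kubelet service running, and why did it last exit? (from the kubeadm advice above)
	systemctl status kubelet
	journalctl -xeu kubelet

	# Health endpoint kubeadm polls during wait-control-plane; "connection refused" matches the log above.
	curl -sSL http://localhost:10248/healthz

	# List any control-plane containers CRI-O started, then inspect a failing one (CONTAINERID is a placeholder).
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Re-run suggested by the minikube output for this error; unverified here, shown only as printed above.
	minikube start -p old-k8s-version-467349 --extra-config=kubelet.cgroup-driver=systemd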

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0829 19:40:12.474671   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-920571 -n embed-certs-920571
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-29 19:49:04.145746472 +0000 UTC m=+6216.866539256
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
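The wait above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A minimal manual reproduction of that check (a sketch, assuming the profile name embed-certs-920571 also serves as the kubeconfig context, as minikube configures it) would be:

	# Same selector and namespace the test waits on; an empty list reproduces this timeout.
	kubectl --context embed-certs-920571 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Recent events in the namespace often show why nothing was scheduled.
	kubectl --context embed-certs-920571 -n kubernetes-dashboard get events --sort-by=.metadata.creationTimestamp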
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-920571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-920571 logs -n 25: (1.997054101s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo cat                              | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:31:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:32:00.606343   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:03.678411   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:09.758354   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:12.830416   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:18.910387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:21.982407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:28.062408   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:31.134407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:37.214369   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:40.286345   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:46.366360   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:49.438406   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:55.518437   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:58.590377   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:04.670397   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:07.742436   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:13.822348   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:16.894422   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:22.974353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:26.046337   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:32.126325   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:35.198391   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:41.278353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:44.350421   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:50.434297   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:53.502296   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:59.582448   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:02.654443   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:08.734358   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:11.806435   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:17.886372   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:20.958351   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:27.038356   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:30.110387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:33.114600   79073 start.go:364] duration metric: took 4m24.136110592s to acquireMachinesLock for "embed-certs-920571"
	I0829 19:34:33.114658   79073 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:33.114666   79073 fix.go:54] fixHost starting: 
	I0829 19:34:33.115014   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:33.115043   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:33.130652   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0829 19:34:33.131096   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:33.131536   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:34:33.131555   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:33.131871   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:33.132060   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:33.132217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:34:33.133784   79073 fix.go:112] recreateIfNeeded on embed-certs-920571: state=Stopped err=<nil>
	I0829 19:34:33.133809   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	W0829 19:34:33.133951   79073 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:33.135573   79073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-920571" ...
	I0829 19:34:33.136726   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Start
	I0829 19:34:33.136873   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring networks are active...
	I0829 19:34:33.137613   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network default is active
	I0829 19:34:33.137909   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network mk-embed-certs-920571 is active
	I0829 19:34:33.138400   79073 main.go:141] libmachine: (embed-certs-920571) Getting domain xml...
	I0829 19:34:33.139091   79073 main.go:141] libmachine: (embed-certs-920571) Creating domain...
	I0829 19:34:33.112327   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:33.112363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112705   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:34:33.112736   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112943   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:34:33.114457   78865 machine.go:96] duration metric: took 4m37.430735456s to provisionDockerMachine
	I0829 19:34:33.114505   78865 fix.go:56] duration metric: took 4m37.452542806s for fixHost
	I0829 19:34:33.114516   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 4m37.452590646s
	W0829 19:34:33.114545   78865 start.go:714] error starting host: provision: host is not running
	W0829 19:34:33.114637   78865 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 19:34:33.114647   78865 start.go:729] Will try again in 5 seconds ...
	I0829 19:34:34.366249   79073 main.go:141] libmachine: (embed-certs-920571) Waiting to get IP...
	I0829 19:34:34.367233   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.367595   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.367671   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.367580   80412 retry.go:31] will retry after 294.1031ms: waiting for machine to come up
	I0829 19:34:34.663229   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.663677   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.663709   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.663624   80412 retry.go:31] will retry after 345.352879ms: waiting for machine to come up
	I0829 19:34:35.010102   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.010576   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.010604   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.010527   80412 retry.go:31] will retry after 295.49024ms: waiting for machine to come up
	I0829 19:34:35.308077   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.308580   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.308608   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.308525   80412 retry.go:31] will retry after 575.095429ms: waiting for machine to come up
	I0829 19:34:35.885400   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.885806   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.885835   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.885762   80412 retry.go:31] will retry after 524.463725ms: waiting for machine to come up
	I0829 19:34:36.411496   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:36.411840   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:36.411866   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:36.411802   80412 retry.go:31] will retry after 672.277111ms: waiting for machine to come up
	I0829 19:34:37.085978   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:37.086512   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:37.086537   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:37.086473   80412 retry.go:31] will retry after 1.185875442s: waiting for machine to come up
	I0829 19:34:38.274401   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:38.274881   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:38.274914   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:38.274827   80412 retry.go:31] will retry after 1.426721352s: waiting for machine to come up
	I0829 19:34:38.116486   78865 start.go:360] acquireMachinesLock for no-preload-690795: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:34:39.703333   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:39.703732   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:39.703756   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:39.703691   80412 retry.go:31] will retry after 1.500429564s: waiting for machine to come up
	I0829 19:34:41.206311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:41.206854   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:41.206882   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:41.206766   80412 retry.go:31] will retry after 2.021866027s: waiting for machine to come up
	I0829 19:34:43.230915   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:43.231329   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:43.231382   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:43.231318   80412 retry.go:31] will retry after 2.415112477s: waiting for machine to come up
	I0829 19:34:45.649815   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:45.650169   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:45.650221   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:45.650140   80412 retry.go:31] will retry after 3.292956483s: waiting for machine to come up
	I0829 19:34:50.094786   79559 start.go:364] duration metric: took 3m31.488453615s to acquireMachinesLock for "default-k8s-diff-port-672127"
	I0829 19:34:50.094847   79559 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:50.094857   79559 fix.go:54] fixHost starting: 
	I0829 19:34:50.095330   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:50.095367   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:50.112044   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0829 19:34:50.112510   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:50.112941   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:34:50.112964   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:50.113325   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:50.113522   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:34:50.113663   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:34:50.115335   79559 fix.go:112] recreateIfNeeded on default-k8s-diff-port-672127: state=Stopped err=<nil>
	I0829 19:34:50.115378   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	W0829 19:34:50.115548   79559 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:50.117176   79559 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-672127" ...
	I0829 19:34:48.944274   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.944748   79073 main.go:141] libmachine: (embed-certs-920571) Found IP for machine: 192.168.61.243
	I0829 19:34:48.944776   79073 main.go:141] libmachine: (embed-certs-920571) Reserving static IP address...
	I0829 19:34:48.944793   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has current primary IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.945167   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.945195   79073 main.go:141] libmachine: (embed-certs-920571) Reserved static IP address: 192.168.61.243
	I0829 19:34:48.945214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | skip adding static IP to network mk-embed-certs-920571 - found existing host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"}
	I0829 19:34:48.945225   79073 main.go:141] libmachine: (embed-certs-920571) Waiting for SSH to be available...
	I0829 19:34:48.945236   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Getting to WaitForSSH function...
	I0829 19:34:48.947646   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948004   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.948034   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948132   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH client type: external
	I0829 19:34:48.948162   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa (-rw-------)
	I0829 19:34:48.948280   79073 main.go:141] libmachine: (embed-certs-920571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:34:48.948307   79073 main.go:141] libmachine: (embed-certs-920571) DBG | About to run SSH command:
	I0829 19:34:48.948328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | exit 0
	I0829 19:34:49.073781   79073 main.go:141] libmachine: (embed-certs-920571) DBG | SSH cmd err, output: <nil>: 
	I0829 19:34:49.074184   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetConfigRaw
	I0829 19:34:49.074813   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.077014   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077349   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.077369   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077550   79073 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:34:49.077724   79073 machine.go:93] provisionDockerMachine start ...
	I0829 19:34:49.077739   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.077936   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.080112   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080448   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.080472   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080548   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.080715   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080853   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080983   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.081110   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.081294   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.081306   79073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:34:49.182232   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:34:49.182282   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182556   79073 buildroot.go:166] provisioning hostname "embed-certs-920571"
	I0829 19:34:49.182582   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182783   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.185368   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185727   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.185751   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185901   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.186077   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186237   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186379   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.186505   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.186721   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.186740   79073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-920571 && echo "embed-certs-920571" | sudo tee /etc/hostname
	I0829 19:34:49.300225   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-920571
	
	I0829 19:34:49.300261   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.303129   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303497   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.303528   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303682   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.303883   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304061   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304193   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.304466   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.304650   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.304667   79073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920571/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:34:49.413678   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:49.413710   79073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:34:49.413765   79073 buildroot.go:174] setting up certificates
	I0829 19:34:49.413774   79073 provision.go:84] configureAuth start
	I0829 19:34:49.413786   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.414069   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.416618   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.416965   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.416993   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.417143   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.419308   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419585   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.419630   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419746   79073 provision.go:143] copyHostCerts
	I0829 19:34:49.419802   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:34:49.419820   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:34:49.419882   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:34:49.419973   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:34:49.419981   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:34:49.420005   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:34:49.420055   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:34:49.420063   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:34:49.420083   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:34:49.420129   79073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920571 san=[127.0.0.1 192.168.61.243 embed-certs-920571 localhost minikube]
	I0829 19:34:49.488345   79073 provision.go:177] copyRemoteCerts
	I0829 19:34:49.488396   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:34:49.488418   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.490954   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491290   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.491328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491473   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.491667   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.491794   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.491932   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.571847   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:34:49.594401   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 19:34:49.615988   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:34:49.638030   79073 provision.go:87] duration metric: took 224.241128ms to configureAuth
	I0829 19:34:49.638058   79073 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:34:49.638251   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:49.638342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.640876   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.641244   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641439   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.641662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641941   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.642126   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.642292   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.642307   79073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:34:49.862247   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:34:49.862276   79073 machine.go:96] duration metric: took 784.541058ms to provisionDockerMachine
	I0829 19:34:49.862286   79073 start.go:293] postStartSetup for "embed-certs-920571" (driver="kvm2")
	I0829 19:34:49.862296   79073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:34:49.862325   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.862632   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:34:49.862660   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.865463   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.865871   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.865899   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.866068   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.866285   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.866459   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.866644   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.948826   79073 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:34:49.952779   79073 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:34:49.952800   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:34:49.952858   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:34:49.952935   79073 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:34:49.953034   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:34:49.962083   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:49.986910   79073 start.go:296] duration metric: took 124.612025ms for postStartSetup
	I0829 19:34:49.986944   79073 fix.go:56] duration metric: took 16.872279139s for fixHost
	I0829 19:34:49.986964   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.989581   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.989919   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.989946   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.990080   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.990281   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990519   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.990835   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.991009   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.991020   79073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:34:50.094598   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960090.067799977
	
	I0829 19:34:50.094618   79073 fix.go:216] guest clock: 1724960090.067799977
	I0829 19:34:50.094626   79073 fix.go:229] Guest: 2024-08-29 19:34:50.067799977 +0000 UTC Remote: 2024-08-29 19:34:49.98694779 +0000 UTC m=+281.148944887 (delta=80.852187ms)
	I0829 19:34:50.094667   79073 fix.go:200] guest clock delta is within tolerance: 80.852187ms
	I0829 19:34:50.094672   79073 start.go:83] releasing machines lock for "embed-certs-920571", held for 16.98003549s
	I0829 19:34:50.094697   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.094962   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:50.097867   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098301   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.098331   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098494   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099007   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099190   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099276   79073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:34:50.099322   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.099429   79073 ssh_runner.go:195] Run: cat /version.json
	I0829 19:34:50.099453   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.101909   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.101932   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102283   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102342   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102363   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102460   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102647   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102717   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102899   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102964   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.103032   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.178744   79073 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:50.220024   79073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:34:50.370308   79073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:34:50.379363   79073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:34:50.379435   79073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:34:50.394787   79073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:34:50.394810   79073 start.go:495] detecting cgroup driver to use...
	I0829 19:34:50.394886   79073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:34:50.410061   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:34:50.423846   79073 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:34:50.423910   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:34:50.437117   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:34:50.450318   79073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:34:50.563588   79073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:34:50.706261   79073 docker.go:233] disabling docker service ...
	I0829 19:34:50.706356   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:34:50.721443   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:34:50.734284   79073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:34:50.871611   79073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:34:51.006487   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:34:51.019543   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:34:51.036398   79073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:34:51.036444   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.045884   79073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:34:51.045931   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.055634   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.065379   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.075104   79073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:34:51.085560   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.095777   79073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.114679   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.125695   79073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:34:51.135263   79073 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:34:51.135328   79073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:34:51.148534   79073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:34:51.158658   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:51.281185   79073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:34:51.378558   79073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:34:51.378618   79073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:34:51.383580   79073 start.go:563] Will wait 60s for crictl version
	I0829 19:34:51.383638   79073 ssh_runner.go:195] Run: which crictl
	I0829 19:34:51.387081   79073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:34:51.426413   79073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:34:51.426491   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.453777   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.481306   79073 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:34:50.118500   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Start
	I0829 19:34:50.118776   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring networks are active...
	I0829 19:34:50.119618   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network default is active
	I0829 19:34:50.120105   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network mk-default-k8s-diff-port-672127 is active
	I0829 19:34:50.120501   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Getting domain xml...
	I0829 19:34:50.121238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Creating domain...
	I0829 19:34:51.414344   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting to get IP...
	I0829 19:34:51.415308   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415790   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.415692   80540 retry.go:31] will retry after 256.92247ms: waiting for machine to come up
	I0829 19:34:51.674173   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674728   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674754   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.674670   80540 retry.go:31] will retry after 338.812858ms: waiting for machine to come up
	I0829 19:34:52.015450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.015977   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.016009   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.015920   80540 retry.go:31] will retry after 385.497306ms: waiting for machine to come up
	I0829 19:34:52.403718   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404324   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404361   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.404259   80540 retry.go:31] will retry after 536.615454ms: waiting for machine to come up
	I0829 19:34:52.943166   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943736   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.943678   80540 retry.go:31] will retry after 584.895039ms: waiting for machine to come up
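
The "will retry after …" messages above come from minikube's retry helper while it waits for the VM's DHCP lease to appear. As a rough illustration only (not minikube's actual retry.go; the delays, jitter, and the stand-in waitForIP function are assumptions), the pattern is a growing, jittered wait loop:

    // Illustrative sketch of a growing, jittered retry loop, similar in spirit
    // to the "will retry after ..." lines logged above. NOT minikube's code.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP is an assumed stand-in for "ask libvirt for the domain's current
    // IP address"; it fails until the DHCP lease shows up.
    func waitForIP(attempt int) (string, error) {
    	if attempt < 4 { // pretend the lease appears on the 4th try
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.50.70", nil
    }

    func main() {
    	delay := 250 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := waitForIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Grow the delay and add jitter so repeated probes do not synchronise.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    }
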
	I0829 19:34:51.482485   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:51.485272   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485599   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:51.485632   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485803   79073 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:34:51.490493   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:51.505212   79073 kubeadm.go:883] updating cluster {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:34:51.505359   79073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:34:51.505413   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:51.539415   79073 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:34:51.539485   79073 ssh_runner.go:195] Run: which lz4
	I0829 19:34:51.543107   79073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:34:51.546831   79073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:34:51.546864   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:34:52.815579   79073 crio.go:462] duration metric: took 1.272496626s to copy over tarball
	I0829 19:34:52.815659   79073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:34:53.530873   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531510   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531540   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:53.531452   80540 retry.go:31] will retry after 790.882954ms: waiting for machine to come up
	I0829 19:34:54.324385   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324785   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324817   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:54.324706   80540 retry.go:31] will retry after 815.842176ms: waiting for machine to come up
	I0829 19:34:55.142878   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143375   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:55.143325   80540 retry.go:31] will retry after 1.177682749s: waiting for machine to come up
	I0829 19:34:56.322780   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323215   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323248   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:56.323160   80540 retry.go:31] will retry after 1.158169512s: waiting for machine to come up
	I0829 19:34:57.483529   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.483990   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.484023   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:57.483917   80540 retry.go:31] will retry after 1.631842784s: waiting for machine to come up
	I0829 19:34:54.931044   79073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.115353131s)
	I0829 19:34:54.931077   79073 crio.go:469] duration metric: took 2.115468165s to extract the tarball
	I0829 19:34:54.931086   79073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:34:54.967902   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:55.006987   79073 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:34:55.007010   79073 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:34:55.007017   79073 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0829 19:34:55.007123   79073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:34:55.007187   79073 ssh_runner.go:195] Run: crio config
	I0829 19:34:55.051987   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:34:55.052016   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:34:55.052039   79073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:34:55.052077   79073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920571 NodeName:embed-certs-920571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:34:55.052254   79073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-920571"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
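The block above is the multi-document kubeadm/kubelet/kube-proxy configuration that minikube writes to /var/tmp/minikube/kubeadm.yaml.new (the 2162-byte scp a few lines below). A small, purely illustrative Go sketch that parses such a file and prints each document's apiVersion and kind; the file path is taken from the log, and gopkg.in/yaml.v3 is an assumed choice of parser:

    // Minimal sketch: read a multi-document YAML file and list each document's
    // apiVersion/kind, e.g. to sanity-check the generated kubeadm config.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // end of the YAML stream
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }
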
	I0829 19:34:55.052337   79073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:34:55.061509   79073 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:34:55.061586   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:34:55.070182   79073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 19:34:55.086180   79073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:34:55.103184   79073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 19:34:55.119226   79073 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0829 19:34:55.122845   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:55.133782   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:55.266431   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:34:55.283043   79073 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571 for IP: 192.168.61.243
	I0829 19:34:55.283066   79073 certs.go:194] generating shared ca certs ...
	I0829 19:34:55.283081   79073 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:34:55.283237   79073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:34:55.283287   79073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:34:55.283297   79073 certs.go:256] generating profile certs ...
	I0829 19:34:55.283438   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/client.key
	I0829 19:34:55.283519   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key.dda9dcff
	I0829 19:34:55.283573   79073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key
	I0829 19:34:55.283708   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:34:55.283773   79073 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:34:55.283793   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:34:55.283831   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:34:55.283869   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:34:55.283901   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:34:55.283957   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:55.284835   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:34:55.330384   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:34:55.366718   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:34:55.393815   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:34:55.436855   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 19:34:55.463343   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:34:55.487693   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:34:55.511657   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:34:55.536017   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:34:55.558298   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:34:55.579840   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:34:55.601271   79073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:34:55.616634   79073 ssh_runner.go:195] Run: openssl version
	I0829 19:34:55.621890   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:34:55.633224   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637431   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637486   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.643034   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:34:55.654607   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:34:55.666297   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670433   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670492   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.675787   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:34:55.686953   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:34:55.697241   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701133   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701189   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.706242   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:34:55.716165   79073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:34:55.720159   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:34:55.727612   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:34:55.734806   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:34:55.742352   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:34:55.749483   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:34:55.756543   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:34:55.763413   79073 kubeadm.go:392] StartCluster: {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:34:55.763499   79073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:34:55.763537   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.803136   79073 cri.go:89] found id: ""
	I0829 19:34:55.803219   79073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:34:55.812851   79073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:34:55.812868   79073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:34:55.812907   79073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:34:55.823461   79073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:55.824969   79073 kubeconfig.go:125] found "embed-certs-920571" server: "https://192.168.61.243:8443"
	I0829 19:34:55.828095   79073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:34:55.838579   79073 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.243
	I0829 19:34:55.838616   79073 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:34:55.838626   79073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:34:55.838669   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.876618   79073 cri.go:89] found id: ""
	I0829 19:34:55.876674   79073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:34:55.893401   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:34:55.902557   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:34:55.902579   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:34:55.902631   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:34:55.911349   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:34:55.911407   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:34:55.920377   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:34:55.928764   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:34:55.928824   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:34:55.937630   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.945836   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:34:55.945897   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.954491   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:34:55.962466   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:34:55.962517   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:34:55.971080   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:34:55.979709   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:56.086301   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.378119   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.29178222s)
	I0829 19:34:57.378153   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.574026   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.655499   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
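
The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than a full kubeadm init. The sketch below condenses that sequence into local exec calls purely for illustration; minikube actually issues these commands over SSH inside the VM via ssh_runner:

    // Illustrative only: the kubeadm phase sequence seen in the log, run locally.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		fmt.Println("kubeadm", args)
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			log.Fatalf("%v\n%s", err, out)
    		}
    	}
    }
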
	I0829 19:34:57.755371   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:34:57.755457   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.255939   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.755813   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.117916   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118404   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118427   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:59.118355   80540 retry.go:31] will retry after 2.806936823s: waiting for machine to come up
	I0829 19:35:01.927079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927473   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:01.927422   80540 retry.go:31] will retry after 3.008556566s: waiting for machine to come up
	I0829 19:34:59.255536   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.756296   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.802484   79073 api_server.go:72] duration metric: took 2.047112988s to wait for apiserver process to appear ...
	I0829 19:34:59.802516   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:34:59.802537   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:34:59.803088   79073 api_server.go:269] stopped: https://192.168.61.243:8443/healthz: Get "https://192.168.61.243:8443/healthz": dial tcp 192.168.61.243:8443: connect: connection refused
	I0829 19:35:00.302707   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.439793   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.439825   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.439837   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.482217   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.482245   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.802617   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.811079   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:02.811116   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.303128   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.307613   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:03.307657   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.803189   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.809164   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:35:03.816623   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:03.816649   79073 api_server.go:131] duration metric: took 4.014126212s to wait for apiserver health ...
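
The healthz wait above polls https://192.168.61.243:8443/healthz until it returns 200 "ok", tolerating the 403 (anonymous user) and 500 (bootstrap post-start hooks still running) responses seen along the way, plus the initial connection-refused error. A minimal sketch of that probe loop, assuming an anonymous client that skips TLS verification (a simplification, not minikube's actual api_server.go client):

    // Illustrative sketch: poll the apiserver /healthz endpoint until it is healthy.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.61.243:8443/healthz" // address taken from the log above

    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. connection refused while apiserver restarts
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // healthy
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
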
	I0829 19:35:03.816657   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:35:03.816664   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:03.818484   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:03.819706   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:03.833365   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:35:03.851607   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:03.861274   79073 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:03.861313   79073 system_pods.go:61] "coredns-6f6b679f8f-2wrn6" [05e03841-faab-4fd4-88c9-199b39a71ba6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:03.861320   79073 system_pods.go:61] "etcd-embed-certs-920571" [5545a51a-3b76-4b39-b347-6f68b8d7edbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:03.861328   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [cecb3e4e-9d55-4dc9-8d14-884ffbf56475] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:03.861334   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [77e06ace-0262-418f-b41c-700aabf2fa1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:03.861338   79073 system_pods.go:61] "kube-proxy-hflpk" [a57a1785-8ccf-4955-b5b2-19c72032d9f5] Running
	I0829 19:35:03.861353   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [bdb2ed9c-3bf2-4e91-b6a4-ba947dab93ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:03.861359   79073 system_pods.go:61] "metrics-server-6867b74b74-xs5gp" [98380519-4a65-4208-b9cc-f1941a5c2f01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:03.861362   79073 system_pods.go:61] "storage-provisioner" [d18a769f-283f-4db3-aad0-82fc0267980f] Running
	I0829 19:35:03.861368   79073 system_pods.go:74] duration metric: took 9.738329ms to wait for pod list to return data ...
	I0829 19:35:03.861375   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:03.865311   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:03.865341   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:03.865355   79073 node_conditions.go:105] duration metric: took 3.974661ms to run NodePressure ...
	I0829 19:35:03.865373   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:04.939084   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939532   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939567   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:04.939479   80540 retry.go:31] will retry after 3.738266407s: waiting for machine to come up
	I0829 19:35:04.123411   79073 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127613   79073 kubeadm.go:739] kubelet initialised
	I0829 19:35:04.127639   79073 kubeadm.go:740] duration metric: took 4.197494ms waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127649   79073 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:04.132339   79073 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.136884   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136909   79073 pod_ready.go:82] duration metric: took 4.548897ms for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.136917   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136927   79073 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.141014   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141037   79073 pod_ready.go:82] duration metric: took 4.103179ms for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.141048   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141062   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.144778   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144799   79073 pod_ready.go:82] duration metric: took 3.728001ms for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.144807   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144812   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.255204   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255227   79073 pod_ready.go:82] duration metric: took 110.408053ms for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.255247   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255253   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656086   79073 pod_ready.go:93] pod "kube-proxy-hflpk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:04.656124   79073 pod_ready.go:82] duration metric: took 400.860776ms for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656137   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:06.674533   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
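
The pod_ready.go lines above wait for each system-critical pod to report the PodReady condition as True, skipping pods whose hosting node is not yet Ready. A rough client-go sketch of the same check (the kubeconfig path and the 2-second poll interval are assumptions, not minikube's values):

    // Illustrative sketch: poll one kube-system pod until its PodReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	const ns, name = "kube-system", "kube-scheduler-embed-certs-920571"
    	for {
    		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				fmt.Printf("pod %q is Ready\n", name)
    				return
    			}
    		}
    		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
    		time.Sleep(2 * time.Second)
    	}
    }
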
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	I0829 19:35:08.681559   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682042   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Found IP for machine: 192.168.50.70
	I0829 19:35:08.682070   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has current primary IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682080   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserving static IP address...
	I0829 19:35:08.682524   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserved static IP address: 192.168.50.70
	I0829 19:35:08.682564   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.682580   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for SSH to be available...
	I0829 19:35:08.682609   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | skip adding static IP to network mk-default-k8s-diff-port-672127 - found existing host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"}
	I0829 19:35:08.682623   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Getting to WaitForSSH function...
	I0829 19:35:08.684466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684816   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.684876   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684957   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH client type: external
	I0829 19:35:08.684982   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa (-rw-------)
	I0829 19:35:08.685032   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:08.685053   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | About to run SSH command:
	I0829 19:35:08.685069   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | exit 0
	I0829 19:35:08.806174   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:08.806493   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetConfigRaw
	I0829 19:35:08.807134   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:08.809574   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.809900   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.809924   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.810227   79559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:35:08.810457   79559 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:08.810478   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:08.810675   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.812964   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.813368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813620   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.813815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.813994   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.814161   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.814338   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.814533   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.814544   79559 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:08.914370   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:08.914415   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914742   79559 buildroot.go:166] provisioning hostname "default-k8s-diff-port-672127"
	I0829 19:35:08.914782   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914975   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.918471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.918829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.918857   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.919021   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.919186   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919373   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.919664   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.919865   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.919884   79559 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-672127 && echo "default-k8s-diff-port-672127" | sudo tee /etc/hostname
	I0829 19:35:09.032573   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-672127
	
	I0829 19:35:09.032606   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.035434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035811   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.035840   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035999   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.036182   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036465   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.036651   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.036833   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.036852   79559 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-672127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-672127/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-672127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:09.142908   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:09.142937   79559 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:09.142978   79559 buildroot.go:174] setting up certificates
	I0829 19:35:09.142995   79559 provision.go:84] configureAuth start
	I0829 19:35:09.143010   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:09.143258   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.145947   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146313   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.146339   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146460   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.148631   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.148953   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.148978   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.149128   79559 provision.go:143] copyHostCerts
	I0829 19:35:09.149188   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:09.149204   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:09.149261   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:09.149368   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:09.149378   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:09.149400   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:09.149492   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:09.149501   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:09.149520   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:09.149578   79559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-672127 san=[127.0.0.1 192.168.50.70 default-k8s-diff-port-672127 localhost minikube]
	I0829 19:35:09.370220   79559 provision.go:177] copyRemoteCerts
	I0829 19:35:09.370277   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:09.370301   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.373233   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373723   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.373756   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373966   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.374180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.374342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.374496   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.457104   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:35:09.481139   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:09.504611   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 19:35:09.529597   79559 provision.go:87] duration metric: took 386.586301ms to configureAuth
	I0829 19:35:09.529628   79559 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:09.529887   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:09.529989   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.532809   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533309   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.533342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533509   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.533743   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.533965   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.534169   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.534372   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.534523   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.534545   79559 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:09.754724   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:09.754752   79559 machine.go:96] duration metric: took 944.279776ms to provisionDockerMachine
	I0829 19:35:09.754766   79559 start.go:293] postStartSetup for "default-k8s-diff-port-672127" (driver="kvm2")
	I0829 19:35:09.754781   79559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:09.754807   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.755236   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:09.755270   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.757713   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.758125   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758274   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.758466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.758682   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.758823   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.841022   79559 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:09.846051   79559 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:09.846081   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:09.846163   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:09.846254   79559 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:09.846379   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:09.857443   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:09.884662   79559 start.go:296] duration metric: took 129.87923ms for postStartSetup
	I0829 19:35:09.884715   79559 fix.go:56] duration metric: took 19.789853711s for fixHost
	I0829 19:35:09.884739   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.888011   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888562   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.888593   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888789   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.888976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889188   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889347   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.889533   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.889723   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.889736   79559 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:09.990749   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960109.967111721
	
	I0829 19:35:09.990772   79559 fix.go:216] guest clock: 1724960109.967111721
	I0829 19:35:09.990782   79559 fix.go:229] Guest: 2024-08-29 19:35:09.967111721 +0000 UTC Remote: 2024-08-29 19:35:09.884720437 +0000 UTC m=+231.415600706 (delta=82.391284ms)
	I0829 19:35:09.990835   79559 fix.go:200] guest clock delta is within tolerance: 82.391284ms
	I0829 19:35:09.990846   79559 start.go:83] releasing machines lock for "default-k8s-diff-port-672127", held for 19.896020367s
	I0829 19:35:09.990891   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.991180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.994076   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.994459   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994613   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995121   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995318   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995407   79559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:09.995464   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.995531   79559 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:09.995569   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.998302   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998673   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998703   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998732   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998750   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998832   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.998976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.999026   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999109   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999162   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999249   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999404   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.999395   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:10.124503   79559 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:10.130734   79559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:10.275859   79559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:10.281662   79559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:10.281728   79559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:10.297464   79559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:10.297488   79559 start.go:495] detecting cgroup driver to use...
	I0829 19:35:10.297553   79559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:10.316686   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:10.332836   79559 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:10.332880   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:10.347021   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:10.364479   79559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:10.506136   79559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:10.659246   79559 docker.go:233] disabling docker service ...
	I0829 19:35:10.659324   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:10.678953   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:10.694844   79559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:10.837509   79559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:10.976512   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:10.993421   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:11.013434   79559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:11.013492   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.023909   79559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:11.023980   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.038560   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.049911   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.060235   79559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:11.076772   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.093357   79559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.110140   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.121770   79559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:11.131641   79559 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:11.131697   79559 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:11.151460   79559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:11.161320   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:11.286180   79559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:11.382235   79559 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:11.382312   79559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:11.388226   79559 start.go:563] Will wait 60s for crictl version
	I0829 19:35:11.388299   79559 ssh_runner.go:195] Run: which crictl
	I0829 19:35:11.391832   79559 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:11.429509   79559 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:11.429601   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.457180   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.487106   79559 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:11.488483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:11.491607   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.491988   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:11.492027   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.492316   79559 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:11.496448   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:11.512045   79559 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:11.512159   79559 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:11.512219   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:11.549212   79559 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:11.549287   79559 ssh_runner.go:195] Run: which lz4
	I0829 19:35:11.554151   79559 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:11.558691   79559 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:11.558718   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:35:12.826290   79559 crio.go:462] duration metric: took 1.272173781s to copy over tarball
	I0829 19:35:12.826387   79559 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
	I0829 19:35:09.162551   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:11.165654   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:13.662027   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:15.034221   79559 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.207800368s)
	I0829 19:35:15.034254   79559 crio.go:469] duration metric: took 2.207935139s to extract the tarball
	I0829 19:35:15.034263   79559 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:15.070411   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:15.117649   79559 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:35:15.117675   79559 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:35:15.117684   79559 kubeadm.go:934] updating node { 192.168.50.70 8444 v1.31.0 crio true true} ...
	I0829 19:35:15.117793   79559 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-672127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:15.117873   79559 ssh_runner.go:195] Run: crio config
	I0829 19:35:15.161749   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:15.161778   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:15.161795   79559 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:15.161815   79559 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-672127 NodeName:default-k8s-diff-port-672127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:35:15.161949   79559 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-672127"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:15.162002   79559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:35:15.171789   79559 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:15.171858   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:15.181011   79559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0829 19:35:15.197394   79559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:15.213309   79559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:35:15.231088   79559 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:15.234732   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:15.245700   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:15.368430   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:15.385792   79559 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127 for IP: 192.168.50.70
	I0829 19:35:15.385820   79559 certs.go:194] generating shared ca certs ...
	I0829 19:35:15.385844   79559 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:15.386020   79559 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:15.386108   79559 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:15.386123   79559 certs.go:256] generating profile certs ...
	I0829 19:35:15.386240   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/client.key
	I0829 19:35:15.386324   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key.828c23de
	I0829 19:35:15.386378   79559 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key
	I0829 19:35:15.386523   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:15.386567   79559 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:15.386582   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:15.386615   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:15.386650   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:15.386680   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:15.386736   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:15.387663   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:15.429474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:15.470861   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:15.514906   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:15.552474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:35:15.581749   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:15.605874   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:15.629703   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:35:15.653589   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:15.680222   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:15.706824   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:15.733354   79559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:15.753069   79559 ssh_runner.go:195] Run: openssl version
	I0829 19:35:15.759905   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:15.770507   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776103   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776159   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.783674   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:15.797519   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:15.809517   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814243   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814311   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.819834   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:15.830130   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:15.840473   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.844974   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.845033   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.850619   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:15.860955   79559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:15.865359   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:15.871149   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:15.876982   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:15.882635   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:15.888020   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:15.893423   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
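	A minimal sketch of the CA hash-link and expiry checks performed above, assuming the same paths the log records (illustrative only, not part of the test output):

	    # hash a CA cert and link it under /etc/ssl/certs/<hash>.0 so OpenSSL can find it
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0

	    # verify a cert stays valid for at least 24h (86400s), as done for each
	    # control-plane certificate in the lines above
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for >= 24h" || echo "expires within 24h"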
	I0829 19:35:15.898989   79559 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:15.899085   79559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:15.899156   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:15.939743   79559 cri.go:89] found id: ""
	I0829 19:35:15.939817   79559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:15.949877   79559 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:15.949896   79559 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:15.949938   79559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:15.959436   79559 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:15.960417   79559 kubeconfig.go:125] found "default-k8s-diff-port-672127" server: "https://192.168.50.70:8444"
	I0829 19:35:15.962469   79559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:15.971672   79559 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0829 19:35:15.971700   79559 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:15.971710   79559 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:15.971777   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:16.015084   79559 cri.go:89] found id: ""
	I0829 19:35:16.015173   79559 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:16.031614   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:16.044359   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:16.044384   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:16.044448   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:35:16.056073   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:16.056139   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:16.066426   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:35:16.075300   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:16.075368   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:16.084795   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.093739   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:16.093804   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.103539   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:35:16.112676   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:16.112744   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:16.121997   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:16.134461   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:16.246853   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.577230   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.330337638s)
	I0829 19:35:17.577271   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.810593   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.892546   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
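	A condensed sketch of the restart sequence the log records here: instead of a full kubeadm init, minikube runs the individual init phases against the regenerated config. The lines below simply restate the logged invocations, assuming the same binary path and config file (illustrative only):

	    KPATH="/var/lib/minikube/binaries/v1.31.0:$PATH"
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$KPATH" kubeadm init phase certs all         --config "$CFG"  # regenerate control-plane certificates
	    sudo env PATH="$KPATH" kubeadm init phase kubeconfig all    --config "$CFG"  # rewrite admin/kubelet/controller-manager/scheduler kubeconfigs
	    sudo env PATH="$KPATH" kubeadm init phase kubelet-start     --config "$CFG"  # write kubelet config and (re)start the kubelet
	    sudo env PATH="$KPATH" kubeadm init phase control-plane all --config "$CFG"  # static pod manifests for apiserver, controller-manager, scheduler
	    sudo env PATH="$KPATH" kubeadm init phase etcd local        --config "$CFG"  # static pod manifest for the local etcd member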
	I0829 19:35:17.993500   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:17.993595   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:18.494169   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
	I0829 19:35:15.662198   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:16.162890   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:16.162919   79073 pod_ready.go:82] duration metric: took 11.506772145s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:16.162931   79073 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.170586   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:18.994574   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.493764   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.509384   79559 api_server.go:72] duration metric: took 1.515882118s to wait for apiserver process to appear ...
	I0829 19:35:19.509415   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:35:19.509440   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.555577   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.555625   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:21.555642   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.572445   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.572481   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:22.009612   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.017592   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.017627   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:22.510148   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.516104   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.516140   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:23.009648   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:23.016342   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:35:23.022852   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:23.022878   79559 api_server.go:131] duration metric: took 3.513455745s to wait for apiserver health ...
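	The readiness polling above can be reproduced with a plain HTTPS request against the same endpoint; a minimal sketch using the address and port from the log (illustrative only). The log shows the anonymous request being rejected with 403 until the RBAC bootstrap roles exist, then 500 while post-start hooks still report failures, and finally 200 with body "ok":

	    # probe the apiserver health endpoint the way the log above does
	    # (-k skips TLS verification since no client certificate is presented)
	    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.50.70:8444/healthz
	    # 403  -> anonymous access to /healthz not yet permitted
	    # 500  -> reachable, but some post-start hooks still failing
	    # 200  -> healthz returns "ok"; the control plane is considered healthy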
	I0829 19:35:23.022889   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:23.022897   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:23.024557   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:23.025764   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:23.035743   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:35:23.075272   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:23.091948   79559 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:23.091991   79559 system_pods.go:61] "coredns-6f6b679f8f-p92hj" [736e7c46-b945-445f-a404-20a609f766e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:23.092004   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [cf016602-46cd-4972-bdd3-1ef5d881b6e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:23.092014   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [eb51ac87-f5e4-4031-84fe-811da2ff8d63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:23.092026   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [caf7b777-935f-4351-b58d-60bb8175bec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:23.092034   79559 system_pods.go:61] "kube-proxy-tlc89" [9a11e5a6-b624-494b-8e94-d362b94fb98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:35:23.092043   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fe83e2af-b046-4d56-9b5c-d7a17db7e854] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:23.092053   79559 system_pods.go:61] "metrics-server-6867b74b74-tbkxg" [6d8f8c92-4f89-4a2a-8690-51a850768516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:23.092065   79559 system_pods.go:61] "storage-provisioner" [7349bb79-c402-4587-ab0b-e52e5d455c61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:35:23.092078   79559 system_pods.go:74] duration metric: took 16.779413ms to wait for pod list to return data ...
	I0829 19:35:23.092091   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:23.099492   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:23.099533   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:23.099547   79559 node_conditions.go:105] duration metric: took 7.450351ms to run NodePressure ...
	I0829 19:35:23.099571   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:23.371279   79559 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377322   79559 kubeadm.go:739] kubelet initialised
	I0829 19:35:23.377346   79559 kubeadm.go:740] duration metric: took 6.045074ms waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377353   79559 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:23.384232   79559 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.391931   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391960   79559 pod_ready.go:82] duration metric: took 7.702072ms for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.391971   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391980   79559 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.396708   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396728   79559 pod_ready.go:82] duration metric: took 4.739691ms for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.396736   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396744   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.401274   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401298   79559 pod_ready.go:82] duration metric: took 4.546455ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.401308   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401314   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:20.670356   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:22.670699   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.786910   78865 start.go:364] duration metric: took 49.670356886s to acquireMachinesLock for "no-preload-690795"
	I0829 19:35:27.786963   78865 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:27.786975   78865 fix.go:54] fixHost starting: 
	I0829 19:35:27.787377   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:27.787425   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:27.803558   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0829 19:35:27.803903   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:27.804328   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:35:27.804348   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:27.804623   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:27.804824   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:27.804967   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:35:27.806332   78865 fix.go:112] recreateIfNeeded on no-preload-690795: state=Stopped err=<nil>
	I0829 19:35:27.806353   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	W0829 19:35:27.806525   78865 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:27.808678   78865 out.go:177] * Restarting existing kvm2 VM for "no-preload-690795" ...
	I0829 19:35:25.407622   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.910410   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
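For reference, a minimal Go sketch (not minikube's provisioning code) of the step just logged: write the CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig and restart CRI-O. The path and the --insecure-registry value are taken from the log; root privileges and a systemd host are assumed, and minikube runs this over SSH rather than locally.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Value from the log: the cluster's service CIDR is treated as an insecure registry range.
	serviceCIDR := "10.96.0.0/12"
	dropIn := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)

	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	// Mirrors the "sudo systemctl restart crio" at the end of the logged SSH command.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v\n%s", err, out)
	}
}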
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
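The fix.go lines above compare the guest clock, read over SSH with date +%s.%N, against the host clock and accept small drift. A hypothetical sketch of that comparison, with a one-second tolerance assumed purely for illustration (the real threshold and parsing live in minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` from the guest and returns
// the difference to the given host timestamp.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest time %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Timestamps taken from the log lines above (nanosecond precision is
	// approximate after the float round-trip).
	delta, err := guestClockDelta("1724960127.745017542", time.Unix(1724960127, 684258077))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}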
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
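The find/mv command above renames any bridge or podman CNI config so the runtime ignores it. A simplified Go equivalent, assuming the same /etc/cni/net.d directory and the .mk_disabled suffix seen in the log:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Only bridge/podman configs are disabled, matching the find pattern above.
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", src)
	}
}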
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
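docker.go then masks the competing runtimes. A condensed sketch of the systemctl sequence visible in the log, run locally here rather than through ssh_runner and tolerant of units that do not exist on the image:

package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			// Some units may be absent on a given guest image; log and continue.
			log.Printf("%v: %v (%s)", s, err, out)
		}
	}
}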
	I0829 19:35:25.170397   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.669465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
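The block above rewrites /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs manager, falls back to modprobe br_netfilter when the bridge sysctl is missing, enables IP forwarding, and restarts CRI-O. A rough local sketch of those edits; file paths and values are from the log, and this is not minikube's crio.go:

package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Same substitutions as the logged sed commands.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}

	// Fallback seen in the log: if the bridge sysctl is missing, load br_netfilter.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		_ = exec.Command("modprobe", "br_netfilter").Run()
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatal(err)
	}
	_ = exec.Command("systemctl", "restart", "crio").Run()
}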
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:27.810111   78865 main.go:141] libmachine: (no-preload-690795) Calling .Start
	I0829 19:35:27.810300   78865 main.go:141] libmachine: (no-preload-690795) Ensuring networks are active...
	I0829 19:35:27.811063   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network default is active
	I0829 19:35:27.811464   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network mk-no-preload-690795 is active
	I0829 19:35:27.811848   78865 main.go:141] libmachine: (no-preload-690795) Getting domain xml...
	I0829 19:35:27.812590   78865 main.go:141] libmachine: (no-preload-690795) Creating domain...
	I0829 19:35:29.131821   78865 main.go:141] libmachine: (no-preload-690795) Waiting to get IP...
	I0829 19:35:29.132876   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.133519   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.133595   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.133481   80876 retry.go:31] will retry after 252.123266ms: waiting for machine to come up
	I0829 19:35:29.387046   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.387534   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.387561   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.387496   80876 retry.go:31] will retry after 304.157394ms: waiting for machine to come up
	I0829 19:35:29.693891   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.694581   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.694603   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.694560   80876 retry.go:31] will retry after 366.980614ms: waiting for machine to come up
	I0829 19:35:30.063032   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.063466   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.063504   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.063431   80876 retry.go:31] will retry after 562.46082ms: waiting for machine to come up
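The retry.go lines above poll for the VM's DHCP lease with a growing, jittered backoff. A hypothetical sketch of such a loop; lookupIP stands in for the libvirt lease query, and the backoff constants and example address are illustrative, not minikube's:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with a growing, jittered delay until it returns an
// address or the deadline passes.
func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Int63n(int64(300*time.Millisecond)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
	}
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // documentation address, illustrative only
	}, time.Minute)
	fmt.Println(ip, err)
}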
	I0829 19:35:30.412868   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.908366   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.408823   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.408848   79559 pod_ready.go:82] duration metric: took 10.007525744s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.408862   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418176   79559 pod_ready.go:93] pod "kube-proxy-tlc89" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.418202   79559 pod_ready.go:82] duration metric: took 9.33136ms for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418214   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424362   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.424388   79559 pod_ready.go:82] duration metric: took 6.165646ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424401   79559 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
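The one-liner above rewrites /etc/hosts so host.minikube.internal points at the gateway. A sketch of the same idempotent update in Go; the shell version stages the file under /tmp/h.$$ and copies it back, while here it is written directly with root assumed:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.72.1\thost.minikube.internal" // gateway IP and name from the log

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of grep -v $'\thost.minikube.internal$' in the logged command.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}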
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:29.670803   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.171154   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:30.627531   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.628123   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.628147   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.628030   80876 retry.go:31] will retry after 488.97189ms: waiting for machine to come up
	I0829 19:35:31.118901   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.119457   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.119480   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.119398   80876 retry.go:31] will retry after 801.189699ms: waiting for machine to come up
	I0829 19:35:31.921939   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.922447   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.922482   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.922391   80876 retry.go:31] will retry after 828.788864ms: waiting for machine to come up
	I0829 19:35:32.752986   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:32.753429   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:32.753465   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:32.753385   80876 retry.go:31] will retry after 1.404436811s: waiting for machine to come up
	I0829 19:35:34.159129   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:34.159714   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:34.159741   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:34.159678   80876 retry.go:31] will retry after 1.312099391s: waiting for machine to come up
	I0829 19:35:35.473045   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:35.473510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:35.473549   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:35.473461   80876 retry.go:31] will retry after 1.46129368s: waiting for machine to come up
	I0829 19:35:35.431524   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:37.437993   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
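Once the preload tarball is on the node it is unpacked into /var with extended attributes preserved, so image layers keep security.capability, and then removed. A condensed sketch of that extraction step; copying the tarball over SSH first is omitted and root is assumed:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing, would scp it first: %v", err)
	}
	// Same flags as the logged tar invocation.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract: %v\n%s", err, out)
	}
	if err := os.Remove(tarball); err != nil {
		log.Fatal(err)
	}
}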
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
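The sequence above is the cached-image fallback: inspect each required image in the runtime, and if it is missing at the expected hash, try to load it from minikube's on-disk cache; the warning fires because cache/images/amd64/registry.k8s.io/pause_3.2 does not exist on this host. A rough, simplified sketch of that flow (not the real cache_images.go), with the image list shortened:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Cache location and image names taken from the log.
	cacheDir := "/home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64"
	images := []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/coredns:1.7.0",
		"registry.k8s.io/etcd:3.4.13-0",
	}
	for _, img := range images {
		// Equivalent of: sudo podman image inspect --format {{.Id}} <image>
		if err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", img).Run(); err == nil {
			continue // already present in the container runtime
		}
		// Not present: fall back to the on-disk cache ("Loading image from: ..." above).
		cached := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(cached); err != nil {
			fmt.Printf("X Unable to load cached image %s: %v\n", img, err)
			continue
		}
		fmt.Println("would load", cached)
	}
}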
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
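The commands above publish each CA under /etc/ssl/certs both by name and by its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL-based clients discover trusted roots. A sketch of those two links per certificate, shelling out to openssl for the hash as the logged commands do:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// forceSymlink mirrors "ln -fs": replace any existing link at the destination.
func forceSymlink(target, link string) error {
	_ = os.Remove(link)
	return os.Symlink(target, link)
}

func linkCA(pemPath string) error {
	// Link by file name, e.g. /etc/ssl/certs/minikubeCA.pem.
	if err := forceSymlink(pemPath, filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))); err != nil {
		return err
	}
	// "openssl x509 -hash -noout -in <pem>" prints the subject hash used for <hash>.0.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hashLink := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	return forceSymlink(pemPath, hashLink)
}

func main() {
	for _, pemPath := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/20259.pem",
		"/usr/share/ca-certificates/202592.pem",
	} {
		if err := linkCA(pemPath); err != nil {
			log.Fatal(err)
		}
	}
}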
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
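The -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours. A sketch of the same check done with crypto/x509 instead of the openssl CLI; the paths are a subset of those probed in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path expires within the given window.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresSoon(c, 24*time.Hour)
		if err != nil {
			log.Printf("%s: %v", c, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}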
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
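
The grep/rm pairs above amount to one stale-config pass per kubeconfig file; a minimal equivalent sketch, using only the endpoint and paths already shown in the log:

    # Drop any kubeconfig that does not point at the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
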
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
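
Pulled out of the ssh_runner lines above, the control-plane restart is just the individual kubeadm init phases run in order against the copied config; a minimal sketch, with the paths and version taken from the log:

    # Re-run the kubeadm phases for an in-place control-plane restart.
    export PATH="/var/lib/minikube/binaries/v1.20.0:$PATH"
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="$PATH" kubeadm init phase kubeconfig all --config "$CFG"
    sudo env PATH="$PATH" kubeadm init phase kubelet-start --config "$CFG"
    sudo env PATH="$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$PATH" kubeadm init phase etcd local --config "$CFG"
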
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:34.172082   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.669124   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.669696   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.936034   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:36.936510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:36.936539   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:36.936463   80876 retry.go:31] will retry after 1.943807762s: waiting for machine to come up
	I0829 19:35:38.881644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:38.882110   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:38.882133   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:38.882067   80876 retry.go:31] will retry after 3.173912619s: waiting for machine to come up
	I0829 19:35:39.932725   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.429439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.168816   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.669594   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.059140   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:42.059668   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:42.059692   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:42.059602   80876 retry.go:31] will retry after 4.193427915s: waiting for machine to come up
	I0829 19:35:44.430473   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.431149   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
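
The repeated pgrep calls interleaved above are the apiserver wait loop, polling roughly every 500ms; a minimal equivalent sketch (the 90s budget is an assumed value for illustration, not minikube's actual timeout):

    # Wait for the kube-apiserver process to appear, polling the way the log does.
    deadline=$((SECONDS + 90))   # assumed budget, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "kube-apiserver did not appear in time" >&2; exit 1; }
      sleep 0.5
    done
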
	I0829 19:35:45.671012   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.168836   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.256270   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.256783   78865 main.go:141] libmachine: (no-preload-690795) Found IP for machine: 192.168.39.76
	I0829 19:35:46.256806   78865 main.go:141] libmachine: (no-preload-690795) Reserving static IP address...
	I0829 19:35:46.256822   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has current primary IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.257249   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.257274   78865 main.go:141] libmachine: (no-preload-690795) Reserved static IP address: 192.168.39.76
	I0829 19:35:46.257289   78865 main.go:141] libmachine: (no-preload-690795) DBG | skip adding static IP to network mk-no-preload-690795 - found existing host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"}
	I0829 19:35:46.257299   78865 main.go:141] libmachine: (no-preload-690795) Waiting for SSH to be available...
	I0829 19:35:46.257313   78865 main.go:141] libmachine: (no-preload-690795) DBG | Getting to WaitForSSH function...
	I0829 19:35:46.259334   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259664   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.259692   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259788   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH client type: external
	I0829 19:35:46.259821   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa (-rw-------)
	I0829 19:35:46.259859   78865 main.go:141] libmachine: (no-preload-690795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:46.259871   78865 main.go:141] libmachine: (no-preload-690795) DBG | About to run SSH command:
	I0829 19:35:46.259902   78865 main.go:141] libmachine: (no-preload-690795) DBG | exit 0
	I0829 19:35:46.389869   78865 main.go:141] libmachine: (no-preload-690795) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:46.390295   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetConfigRaw
	I0829 19:35:46.390987   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.393890   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394310   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.394342   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394673   78865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:35:46.394846   78865 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:46.394869   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:46.395082   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.397203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397508   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.397535   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397676   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.397862   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398011   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398178   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.398314   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.398475   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.398486   78865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:46.502132   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:46.502163   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502426   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:35:46.502449   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.505084   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505414   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.505443   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505665   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.505861   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506035   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506219   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.506379   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.506573   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.506597   78865 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-690795 && echo "no-preload-690795" | sudo tee /etc/hostname
	I0829 19:35:46.627246   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-690795
	
	I0829 19:35:46.627269   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.630081   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630430   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.630454   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630611   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.630780   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.630947   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.631233   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.631397   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.631545   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.631568   78865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-690795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-690795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-690795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:46.746055   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:46.746106   78865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:46.746131   78865 buildroot.go:174] setting up certificates
	I0829 19:35:46.746143   78865 provision.go:84] configureAuth start
	I0829 19:35:46.746160   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.746411   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.749125   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749476   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.749497   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.751828   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752178   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.752203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752317   78865 provision.go:143] copyHostCerts
	I0829 19:35:46.752384   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:46.752404   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:46.752475   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:46.752580   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:46.752591   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:46.752619   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:46.752693   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:46.752703   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:46.752728   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:46.752791   78865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.no-preload-690795 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-690795]
	I0829 19:35:46.901689   78865 provision.go:177] copyRemoteCerts
	I0829 19:35:46.901744   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:46.901764   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.904873   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905241   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.905287   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905458   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.905657   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.905805   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.905960   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:46.988181   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:47.011149   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:35:47.034849   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
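
copyRemoteCerts above pushes the CA plus the freshly generated server certificate and key into /etc/docker on the guest via minikube's internal scp helper; a hand-rolled equivalent, with plain scp/ssh assumed in place of that helper:

    # Hypothetical equivalent of copyRemoteCerts using stock scp/ssh.
    KEY=/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa
    MK=/home/jenkins/minikube-integration/19531-13056/.minikube
    scp -i "$KEY" "$MK/certs/ca.pem" "$MK/machines/server.pem" "$MK/machines/server-key.pem" docker@192.168.39.76:/tmp/
    ssh -i "$KEY" docker@192.168.39.76 'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'
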
	I0829 19:35:47.057375   78865 provision.go:87] duration metric: took 311.217634ms to configureAuth
	I0829 19:35:47.057402   78865 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:47.057599   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:47.057695   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.060274   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060594   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.060620   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060750   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.060976   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061149   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061311   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.061465   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.061676   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.061703   78865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:47.284836   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:47.284862   78865 machine.go:96] duration metric: took 890.004565ms to provisionDockerMachine
	I0829 19:35:47.284876   78865 start.go:293] postStartSetup for "no-preload-690795" (driver="kvm2")
	I0829 19:35:47.284889   78865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:47.284909   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.285207   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:47.285232   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.287875   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288162   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.288180   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288391   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.288597   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.288772   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.288899   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.372833   78865 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:47.376649   78865 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:47.376670   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:47.376729   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:47.376801   78865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:47.376881   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:47.385721   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:47.407601   78865 start.go:296] duration metric: took 122.711153ms for postStartSetup
	I0829 19:35:47.407640   78865 fix.go:56] duration metric: took 19.620666095s for fixHost
	I0829 19:35:47.407673   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.410483   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.410873   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.410903   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.411139   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.411363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411527   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411674   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.411830   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.411987   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.412001   78865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:47.518841   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960147.499237123
	
	I0829 19:35:47.518864   78865 fix.go:216] guest clock: 1724960147.499237123
	I0829 19:35:47.518872   78865 fix.go:229] Guest: 2024-08-29 19:35:47.499237123 +0000 UTC Remote: 2024-08-29 19:35:47.407643858 +0000 UTC m=+351.882891548 (delta=91.593265ms)
	I0829 19:35:47.518891   78865 fix.go:200] guest clock delta is within tolerance: 91.593265ms
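
The guest/host clock comparison above (delta 91.593265ms, within tolerance) can be reproduced by hand with the same date +%s.%N probe; a small sketch, with the SSH key path taken from the log:

    # Compare the guest clock against the local (host) clock.
    KEY=/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.76 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %+.3fs\n", h - g }'
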
	I0829 19:35:47.518896   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 19.731957743s
	I0829 19:35:47.518914   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.519214   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:47.521738   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522125   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.522153   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522310   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.522806   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523016   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523082   78865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:47.523127   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.523209   78865 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:47.523225   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.526076   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526443   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.526462   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526489   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526681   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.526826   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527005   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527036   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.527073   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.527199   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.527197   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.527370   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527537   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527690   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.635450   78865 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:47.641274   78865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:47.788805   78865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:47.794545   78865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:47.794601   78865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:47.810156   78865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:47.810175   78865 start.go:495] detecting cgroup driver to use...
	I0829 19:35:47.810228   78865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:47.825795   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:47.839011   78865 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:47.839061   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:47.851854   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:47.864467   78865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:47.999155   78865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:48.143858   78865 docker.go:233] disabling docker service ...
	I0829 19:35:48.143921   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:48.157740   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:48.172067   78865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:48.339557   78865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:48.462950   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
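
Taken together, the systemctl calls above switch the node to CRI-O by stopping, disabling, and masking the cri-dockerd and Docker units; a consolidated sketch using only the units named in the log:

    # Make sure neither cri-dockerd nor Docker can answer on the CRI socket.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
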
	I0829 19:35:48.475646   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:48.492262   78865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:48.492329   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.501580   78865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:48.501647   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.511241   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.520477   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.530413   78865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:48.540457   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.551258   78865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.567365   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.577266   78865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:48.586423   78865 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:48.586479   78865 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:48.599527   78865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:48.608666   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:48.721808   78865 ssh_runner.go:195] Run: sudo systemctl restart crio
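
The sed/sysctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup), loads br_netfilter, enables IP forwarding, and restarts CRI-O; the same edits consolidated into one sketch, with values copied from the log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo modprobe br_netfilter              # bridge-nf-call-iptables was missing
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
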
	I0829 19:35:48.811417   78865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:48.811495   78865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:48.816689   78865 start.go:563] Will wait 60s for crictl version
	I0829 19:35:48.816750   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:48.820563   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:48.862786   78865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:48.862869   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.889834   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.918515   78865 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:48.919643   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:48.922182   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922530   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:48.922560   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922725   78865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:48.926877   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:48.939254   78865 kubeadm.go:883] updating cluster {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:48.939379   78865 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:48.939413   78865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:48.972281   78865 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:48.972304   78865 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:48.972345   78865 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.972361   78865 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.972384   78865 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.972425   78865 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.972443   78865 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:48.972452   78865 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 19:35:48.972496   78865 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.972558   78865 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973929   78865 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.973979   78865 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 19:35:48.973933   78865 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.973931   78865 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.973932   78865 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973939   78865 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.229315   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 19:35:49.232334   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.271261   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.328903   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.339435   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.349057   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.356840   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.387705   78865 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 19:35:49.387748   78865 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 19:35:49.387760   78865 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.387777   78865 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.387808   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.387829   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.389731   78865 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 19:35:49.389769   78865 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.389809   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.438231   78865 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 19:35:49.438264   78865 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.438304   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.453177   78865 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 19:35:49.453220   78865 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.453270   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.455713   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.455767   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.455802   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.455804   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.455772   78865 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 19:35:49.455895   78865 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.455921   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.458141   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.539090   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.539125   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.568605   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.573622   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.678619   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.680581   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.680584   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.680671   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.699638   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.706556   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.803909   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.809759   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 19:35:49.809863   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.810356   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 19:35:49.810423   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:49.811234   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 19:35:49.811285   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:49.832040   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 19:35:49.832102   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 19:35:49.832153   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:49.832162   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:49.862517   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 19:35:49.862537   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862578   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862653   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 19:35:49.862696   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 19:35:49.862703   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 19:35:49.862731   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 19:35:49.862760   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 19:35:49.862788   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:35:50.192890   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.930928   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:50.931805   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.430716   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.168967   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:52.169327   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:51.820978   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.958376621s)
	I0829 19:35:51.821014   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 19:35:51.821035   78865 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821077   78865 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.958265625s)
	I0829 19:35:51.821109   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821108   78865 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.62819044s)
	I0829 19:35:51.821211   78865 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 19:35:51.821243   78865 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:51.821275   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:51.821111   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 19:35:55.931182   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.431477   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.669752   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:56.670764   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:55.594240   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.773093303s)
	I0829 19:35:55.594275   78865 ssh_runner.go:235] Completed: which crictl: (3.77298113s)
	I0829 19:35:55.594290   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 19:35:55.594340   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:55.594348   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:55.594403   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:57.972145   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377784997s)
	I0829 19:35:57.972180   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.377757134s)
	I0829 19:35:57.972210   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 19:35:57.972223   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:57.972237   78865 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:57.972270   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:58.025853   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:59.843856   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.871560481s)
	I0829 19:35:59.843883   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818003416s)
	I0829 19:35:59.843887   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 19:35:59.843915   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 19:35:59.843925   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.844004   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:35:59.844019   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.849625   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 19:36:00.432638   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.078312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.170365   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.668465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.670347   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.294196   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.450154791s)
	I0829 19:36:01.294230   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 19:36:01.294273   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:01.294336   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:03.144937   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.850574318s)
	I0829 19:36:03.144978   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 19:36:03.145018   78865 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.145081   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.803763   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 19:36:03.803802   78865 cache_images.go:123] Successfully loaded all cached images
	I0829 19:36:03.803807   78865 cache_images.go:92] duration metric: took 14.831492974s to LoadCachedImages
	I0829 19:36:03.803818   78865 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.31.0 crio true true} ...
	I0829 19:36:03.803927   78865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:36:03.803988   78865 ssh_runner.go:195] Run: crio config
	I0829 19:36:03.854859   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:03.854879   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:03.854894   78865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:36:03.854915   78865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-690795 NodeName:no-preload-690795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:36:03.855055   78865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-690795"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:36:03.855114   78865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:36:03.865163   78865 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:36:03.865236   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:36:03.874348   78865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:36:03.891540   78865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:36:03.908488   78865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 19:36:03.926440   78865 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0829 19:36:03.930270   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:36:03.942353   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:36:04.066646   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:36:04.083872   78865 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795 for IP: 192.168.39.76
	I0829 19:36:04.083901   78865 certs.go:194] generating shared ca certs ...
	I0829 19:36:04.083921   78865 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:36:04.084106   78865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:36:04.084172   78865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:36:04.084186   78865 certs.go:256] generating profile certs ...
	I0829 19:36:04.084307   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/client.key
	I0829 19:36:04.084432   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key.8a2db174
	I0829 19:36:04.084492   78865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key
	I0829 19:36:04.084656   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:36:04.084705   78865 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:36:04.084718   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:36:04.084753   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:36:04.084790   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:36:04.084827   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:36:04.084883   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:36:04.085744   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:36:04.124689   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:36:04.158769   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:36:04.188748   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:36:04.217577   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:36:04.251166   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:36:04.282961   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:36:04.306431   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:36:04.329260   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:36:04.365050   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:36:04.393054   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:36:04.417384   78865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:36:04.434555   78865 ssh_runner.go:195] Run: openssl version
	I0829 19:36:04.440074   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:36:04.451378   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455603   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455655   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.461114   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:36:04.472522   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:36:04.483064   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487316   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487383   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.492860   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:36:04.504284   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:36:04.515522   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519853   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519908   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.525240   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:36:04.536612   78865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:36:04.540905   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:36:04.546622   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:36:04.552303   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:36:04.558306   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:36:04.564129   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:36:04.569635   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:36:04.575196   78865 kubeadm.go:392] StartCluster: {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:36:04.575279   78865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:36:04.575360   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.619563   78865 cri.go:89] found id: ""
	I0829 19:36:04.619638   78865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:36:04.629655   78865 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:36:04.629675   78865 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:36:04.629785   78865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:36:04.638771   78865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:36:04.639763   78865 kubeconfig.go:125] found "no-preload-690795" server: "https://192.168.39.76:8443"
	I0829 19:36:04.641783   78865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:36:04.650605   78865 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0829 19:36:04.650634   78865 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:36:04.650644   78865 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:36:04.650693   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.685589   78865 cri.go:89] found id: ""
	I0829 19:36:04.685656   78865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:36:04.702584   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:36:04.711693   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:36:04.711712   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:36:04.711753   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:36:04.720291   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:36:04.720349   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:36:04.729301   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:36:04.739449   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:36:04.739513   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:36:04.748786   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.757128   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:36:04.757175   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.767533   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:36:04.777322   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:36:04.777373   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:36:04.786269   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:36:04.795387   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:04.904530   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.430803   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:07.431525   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.169466   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.669719   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:05.750216   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.949551   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.043930   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.140396   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:36:06.140505   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.641069   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.141458   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.161360   78865 api_server.go:72] duration metric: took 1.020963124s to wait for apiserver process to appear ...
	I0829 19:36:07.161390   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:36:07.161426   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.327675   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:36:10.327707   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:36:10.327721   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.396704   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.396737   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:10.661699   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.666518   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.666544   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.162227   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.167736   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.167774   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.662428   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.668688   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.668727   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:12.162372   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:12.168297   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:36:12.175933   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:36:12.175956   78865 api_server.go:131] duration metric: took 5.014557664s to wait for apiserver health ...
	I0829 19:36:12.175967   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:12.175975   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:12.177903   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:36:09.930962   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:11.932180   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.669915   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.169150   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:12.179056   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:36:12.202639   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:36:12.221804   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:36:12.242859   78865 system_pods.go:59] 8 kube-system pods found
	I0829 19:36:12.242897   78865 system_pods.go:61] "coredns-6f6b679f8f-j8zzh" [01eaffa5-a976-441c-987c-bdf3b7f72cd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:36:12.242905   78865 system_pods.go:61] "etcd-no-preload-690795" [df54ae59-44ff-4f7b-b6c0-6145bdae3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:36:12.242912   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [aee247f2-1381-4571-a671-2cf140c78196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:36:12.242919   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [69244a85-2778-46c8-a95c-d0f8a264c0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:36:12.242923   78865 system_pods.go:61] "kube-proxy-q4mbt" [985478f9-235d-4922-a7fd-a0cbdddf3f68] Running
	I0829 19:36:12.242934   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [e1e141ab-eb79-4c87-bccd-274f1e7495b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:36:12.242940   78865 system_pods.go:61] "metrics-server-6867b74b74-svnwn" [e096a3dc-1166-4ee3-9f3f-e044064a5a13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:36:12.242945   78865 system_pods.go:61] "storage-provisioner" [6fc868fa-2221-45ad-903e-cd3d2297a3e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:36:12.242952   78865 system_pods.go:74] duration metric: took 21.125083ms to wait for pod list to return data ...
	I0829 19:36:12.242962   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:36:12.253567   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:36:12.253598   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:36:12.253612   78865 node_conditions.go:105] duration metric: took 10.645029ms to run NodePressure ...
	I0829 19:36:12.253634   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:12.514683   78865 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520060   78865 kubeadm.go:739] kubelet initialised
	I0829 19:36:12.520082   78865 kubeadm.go:740] duration metric: took 5.371928ms waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520088   78865 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:36:12.524795   78865 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:14.533484   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:14.430676   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:16.930723   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.668846   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.669308   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.031326   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.530568   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.430550   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.431080   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.431736   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.168590   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.168898   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.531983   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.032162   78865 pod_ready.go:93] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:22.032187   78865 pod_ready.go:82] duration metric: took 9.507358099s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:22.032200   78865 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038935   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.038956   78865 pod_ready.go:82] duration metric: took 1.006750868s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038966   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043258   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.043278   78865 pod_ready.go:82] duration metric: took 4.305789ms for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043298   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049140   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.049159   78865 pod_ready.go:82] duration metric: took 5.852855ms for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049170   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055033   78865 pod_ready.go:93] pod "kube-proxy-q4mbt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.055054   78865 pod_ready.go:82] duration metric: took 5.87681ms for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055067   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229706   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.229734   78865 pod_ready.go:82] duration metric: took 174.6598ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229748   78865 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:25.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:25.930818   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.430312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.169024   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:26.169599   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.668840   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:27.736899   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.235632   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.430611   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.930362   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.669561   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.671106   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.236488   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.736240   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.931264   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.430665   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:35.172336   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.671072   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:36.736855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:38.736902   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:39.929907   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.930998   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:40.169548   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:42.672996   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.236777   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.736150   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.931489   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:45.933439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.431152   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:45.169686   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:47.169784   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:46.236187   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.736605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.431543   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.931361   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:49.669984   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.168756   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.736642   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:53.236282   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.430710   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:57.430791   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:54.169625   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.169688   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.170249   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.235887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:00.236133   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.931992   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.430785   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:00.669482   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.669691   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.237155   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.237334   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.431033   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.431419   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.431901   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.168832   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:07.669695   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.736029   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.736415   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:10.930895   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.430209   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:10.168947   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:12.669439   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:11.236100   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.236138   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.930954   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.931355   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.669895   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.170115   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.736304   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:19.736492   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.429878   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.430233   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:19.668651   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:21.669949   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.670402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.235749   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.236078   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.431492   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.930440   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:26.168605   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.170938   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.735518   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.737503   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.431222   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.931202   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.668490   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:32.669630   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.235788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.735661   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.931996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.932964   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.429817   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:35.168843   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.169415   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.736103   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.736554   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.235995   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.430698   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.430836   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:39.670059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.169724   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.237785   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.736417   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.929743   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.930603   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.669068   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:47.169440   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.737461   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.236079   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.431026   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.931134   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:49.671574   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:52.168702   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.237371   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:53.736122   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.430278   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.431024   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.169824   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.668831   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.236564   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.736176   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.930996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.931806   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.430980   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
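The recurring "failed describe nodes" error is the same symptom each time: the bundled kubectl cannot reach an API server on localhost:8443, so the node-description step is skipped. A quick way to confirm the same condition by hand on the node is sketched here; it is a hypothetical check, not part of the logged run.

    # Probe the endpoint the failing "describe nodes" call depends on;
    # "connection refused" means no kube-apiserver is listening on 8443.
    curl -ksS https://localhost:8443/healthz || echo "kube-apiserver not reachable on localhost:8443"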
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:59.169059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:01.668656   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.669716   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.736901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.236263   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.431293   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:07.930953   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.168530   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.169657   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.736429   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.236731   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.931677   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.431484   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:10.668769   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.669771   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:10.736651   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.737433   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.236521   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:14.930832   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:16.931496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:14.669989   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.169402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.736005   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:20.236887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:19.430240   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.930946   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:19.170077   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.670142   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.672076   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:22.237387   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.737069   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.934621   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:26.430632   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:26.168927   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.169453   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:27.235855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:29.236537   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.929913   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.930403   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.931280   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:30.169738   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.669256   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:31.736303   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:34.235284   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:35.430450   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.931562   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
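	[editorial note] Every "describe nodes" attempt in this log fails the same way: kubectl targets localhost:8443 and the connection is refused, which is consistent with the probes above never finding a running kube-apiserver container. As a hedged aside (not part of the test output), a plain TCP dial is enough to confirm that nothing is listening on that port:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // The repeated "connection refused" above means nothing is listening on
	        // the apiserver port; a plain TCP dial makes that easy to verify.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver port not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on localhost:8443")
	    }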
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:35.169023   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.668411   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:36.236332   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:38.236971   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:40.237468   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.932147   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.430752   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.170246   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.669492   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.737831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.236831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:44.930652   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:46.930941   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:44.669939   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.168670   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.735386   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.736163   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.431741   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.931271   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.169856   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.669655   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:52.236654   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:54.735292   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:53.931350   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.430287   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.432418   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
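	[editorial note] Interleaved with the retry loop, the other test processes in this log (pids 79073, 78865, 79559) keep polling metrics-server pods whose Ready condition stays False. The Go sketch below shows an equivalent readiness poll; the pod name is taken from the log, while the kubectl/jsonpath approach and the 2-second interval are assumptions for illustration, not the harness's pod_ready implementation.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // readyStatus reads the pod's Ready condition ("True"/"False") via kubectl.
	    func readyStatus(namespace, pod string) (string, error) {
	        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
	            "-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	        // Pod name copied from the log lines above; adjust for your cluster.
	        const ns, pod = "kube-system", "metrics-server-6867b74b74-tbkxg"
	        for i := 0; i < 10; i++ {
	            status, err := readyStatus(ns, pod)
	            if err != nil {
	                fmt.Println("lookup failed:", err)
	            } else {
	                fmt.Printf("pod %q Ready=%s\n", pod, status)
	                if status == "True" {
	                    return
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }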
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:54.170237   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.668188   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.668309   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.736467   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:59.236483   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:00.930424   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.931161   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
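Note: the block above repeats for the rest of this start attempt. Process 79869 appears to be the old-k8s-version cluster (it is using the v1.20.0 binaries); no kube-apiserver container has been created yet, so every crictl query returns nothing, every "kubectl describe nodes" call is refused on localhost:8443, and the log collector falls back to kubelet, dmesg, CRI-O and container-status output. The interleaved pod_ready lines appear to come from the other StartStop clusters running in parallel, each still waiting for its metrics-server pod to become Ready. A rough manual equivalent of the same checks (illustrative only, command lines taken from the log above, run on the node via "minikube ssh"):

    sudo crictl ps -a --quiet --name=kube-apiserver      # empty output while the apiserver container does not exist
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
                                                          # refused on localhost:8443 until the apiserver comes up
    sudo journalctl -u kubelet -n 400                     # kubelet log tail, the usual place to see why static pods are missing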
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:00.668839   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.670762   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.236788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:03.237406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.430398   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.431044   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:05.169920   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.170223   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.735375   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.737017   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.235794   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:09.930945   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:11.931285   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:09.669065   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.169167   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.236756   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.237536   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.431203   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.930983   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:14.169307   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.163640   79073 pod_ready.go:82] duration metric: took 4m0.000694226s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:16.163683   79073 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:16.163706   79073 pod_ready.go:39] duration metric: took 4m12.036045825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:16.163738   79073 kubeadm.go:597] duration metric: took 4m20.35086556s to restartPrimaryControlPlane
	W0829 19:39:16.163795   79073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:16.163827   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
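Note: this is the turning point for the cluster driven by process 79073. The 4m0s wait for its metrics-server pod to become Ready expires, restartPrimaryControlPlane gives up, and minikube falls back to resetting the cluster with the command shown in the last line above. An illustrative manual equivalent (destructive: it wipes the node's control-plane state) would be:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # minikube then re-initialises the control plane for the same Kubernetes version instead of restarting the old one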
	I0829 19:39:16.736978   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.236047   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.431674   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:21.930447   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:21.236189   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.236347   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.237213   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.932634   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:26.431145   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:28.431496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:27.237871   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:29.735887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:30.931849   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:33.424593   79559 pod_ready.go:82] duration metric: took 4m0.000177098s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:33.424633   79559 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:33.424656   79559 pod_ready.go:39] duration metric: took 4m10.047294609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:33.424687   79559 kubeadm.go:597] duration metric: took 4m17.474785939s to restartPrimaryControlPlane
	W0829 19:39:33.424745   79559 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:33.424773   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
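Each gather-logs cycle above repeats the same probe per control-plane component: run `sudo crictl ps -a --quiet --name=<component>` and treat an empty result as "No container was found matching". A minimal sketch of that loop, assuming crictl is on PATH and usable via sudo; this is an illustration of the pattern, not minikube's cri.go/logs.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out the same way the ssh_runner lines in the log do
// and returns the container IDs crictl printed (one per line).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("found %d container(s) for %q\n", len(ids), name)
		}
	}
}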
	I0829 19:39:31.737021   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.236447   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:36.736406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:39.235941   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:42.401382   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.237529155s)
	I0829 19:39:42.401460   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:42.428754   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:42.441896   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:42.456122   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:42.456147   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:42.456190   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:42.471887   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:42.471947   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:42.483709   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:42.493000   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:42.493070   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:42.511916   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.520829   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:42.520891   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.530567   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:42.540199   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:42.540252   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
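The kubeadm.go:155-163 lines above record the stale-config check: list the expected kubeconfig files under /etc/kubernetes, and remove any that do not reference https://control-plane.minikube.internal:8443 before running `kubeadm init`. A minimal sketch of that logic under those assumptions; it is an illustration, not minikube's implementation, and it needs the privileges that the log's sudo invocations imply.

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors the log: "<endpoint> may not be in <file> - will remove", then `rm -f`.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f) // ignore "no such file", as rm -f does
		}
	}
}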
	I0829 19:39:42.550058   79073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:42.596809   79073 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:39:42.596966   79073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:42.706623   79073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:42.706766   79073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:42.706931   79073 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:42.717740   79073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:42.719760   79073 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:42.719862   79073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:42.719929   79073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:42.720023   79073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:42.720079   79073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:42.720144   79073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:42.720193   79073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:42.720248   79073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:42.720315   79073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:42.720386   79073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:42.720459   79073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:42.720496   79073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:42.720555   79073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:42.827328   79073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:43.276222   79073 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:39:43.445594   79073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:43.554811   79073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:43.788184   79073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:43.788791   79073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:43.791871   79073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:43.794448   79073 out.go:235]   - Booting up control plane ...
	I0829 19:39:43.794600   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:43.794702   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:43.794800   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:43.813894   79073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:43.822272   79073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:43.822357   79073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:41.236034   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:43.236446   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:45.237605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:39:43.950283   79073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:39:43.950458   79073 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:39:44.452956   79073 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.82915ms
	I0829 19:39:44.453068   79073 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:39:49.455000   79073 kubeadm.go:310] [api-check] The API server is healthy after 5.001789194s
	I0829 19:39:49.473145   79073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:39:49.496760   79073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:39:49.530950   79073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:39:49.531148   79073 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-920571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:39:49.548546   79073 kubeadm.go:310] [bootstrap-token] Using token: bc4428.p8e3szrujohqnvnh
	I0829 19:39:47.735610   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.735833   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.549992   79073 out.go:235]   - Configuring RBAC rules ...
	I0829 19:39:49.550151   79073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:39:49.558070   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:39:49.573758   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:39:49.579988   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:39:49.585250   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:39:49.592477   79073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:39:49.863168   79073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:39:50.294056   79073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:39:50.862652   79073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:39:50.863644   79073 kubeadm.go:310] 
	I0829 19:39:50.863717   79073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:39:50.863729   79073 kubeadm.go:310] 
	I0829 19:39:50.863861   79073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:39:50.863881   79073 kubeadm.go:310] 
	I0829 19:39:50.863917   79073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:39:50.864019   79073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:39:50.864101   79073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:39:50.864111   79073 kubeadm.go:310] 
	I0829 19:39:50.864215   79073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:39:50.864225   79073 kubeadm.go:310] 
	I0829 19:39:50.864298   79073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:39:50.864312   79073 kubeadm.go:310] 
	I0829 19:39:50.864398   79073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:39:50.864517   79073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:39:50.864617   79073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:39:50.864631   79073 kubeadm.go:310] 
	I0829 19:39:50.864743   79073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:39:50.864856   79073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:39:50.864869   79073 kubeadm.go:310] 
	I0829 19:39:50.864983   79073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865110   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:39:50.865142   79073 kubeadm.go:310] 	--control-plane 
	I0829 19:39:50.865152   79073 kubeadm.go:310] 
	I0829 19:39:50.865262   79073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:39:50.865270   79073 kubeadm.go:310] 
	I0829 19:39:50.865370   79073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865527   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:39:50.866485   79073 kubeadm.go:310] W0829 19:39:42.565022    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866852   79073 kubeadm.go:310] W0829 19:39:42.566073    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866979   79073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:39:50.867009   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:39:50.867020   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:39:50.868683   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:39:50.869952   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:39:50.880385   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
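The two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist into it as 1-k8s.conflist. The log does not show the file's contents; the sketch below writes an illustrative bridge + portmap conflist of the usual shape, where the JSON payload and the 10.244.0.0/16 pod subnet are assumptions rather than the exact file minikube copied. Writing under /etc requires root.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Illustrative bridge CNI config; the real 1-k8s.conflist may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}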
	I0829 19:39:50.900028   79073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:39:50.900152   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:50.900187   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-920571 minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=embed-certs-920571 minikube.k8s.io/primary=true
	I0829 19:39:51.090710   79073 ops.go:34] apiserver oom_adj: -16
	I0829 19:39:51.090865   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:51.591720   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.091579   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.591872   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.091671   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.591191   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.091640   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.591356   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.674005   79073 kubeadm.go:1113] duration metric: took 3.773916232s to wait for elevateKubeSystemPrivileges
	I0829 19:39:54.674046   79073 kubeadm.go:394] duration metric: took 4m58.910639816s to StartCluster
	I0829 19:39:54.674070   79073 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.674178   79073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:39:54.675789   79073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.676038   79073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:39:54.676095   79073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:39:54.676184   79073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-920571"
	I0829 19:39:54.676210   79073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-920571"
	I0829 19:39:54.676222   79073 addons.go:69] Setting metrics-server=true in profile "embed-certs-920571"
	I0829 19:39:54.676225   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:39:54.676241   79073 addons.go:234] Setting addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:54.676264   79073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920571"
	I0829 19:39:54.676216   79073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-920571"
	W0829 19:39:54.676329   79073 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:39:54.676360   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	W0829 19:39:54.676392   79073 addons.go:243] addon metrics-server should already be in state true
	I0829 19:39:54.676455   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.676650   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676664   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676682   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676684   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676824   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676859   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.677794   79073 out.go:177] * Verifying Kubernetes components...
	I0829 19:39:54.679112   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:39:54.694669   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:39:54.694717   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0829 19:39:54.695090   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695420   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695532   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0829 19:39:54.695640   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695656   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695925   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695948   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695951   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.696038   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696266   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696373   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.696392   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.696443   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.696600   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.696629   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.696745   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.697378   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.697413   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.702955   79073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-920571"
	W0829 19:39:54.702978   79073 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:39:54.703003   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.703347   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.703377   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.714194   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0829 19:39:54.714526   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0829 19:39:54.714735   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.714916   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.715368   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715369   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715389   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715401   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715712   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715713   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715944   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.715943   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.717556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.717758   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.718972   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0829 19:39:54.719212   79073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:39:54.719303   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.719212   79073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:39:52.236231   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.238843   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.719723   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.719735   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.720033   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.720307   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:39:54.720322   79073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:39:54.720342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.720533   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.720559   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.720952   79073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:54.720975   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:39:54.720992   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.723754   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724174   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.724198   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724516   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.724684   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.724820   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.724879   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724973   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.725426   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.725466   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.725687   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.725827   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.725982   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.726117   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.743443   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0829 19:39:54.744025   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.744590   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.744618   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.744908   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.745030   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.746560   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.746809   79073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:54.746819   79073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:39:54.746831   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.749422   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749802   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.749827   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749904   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.750058   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.750206   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.750320   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.902922   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:39:54.921933   79073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936483   79073 node_ready.go:49] node "embed-certs-920571" has status "Ready":"True"
	I0829 19:39:54.936513   79073 node_ready.go:38] duration metric: took 14.542582ms for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936524   79073 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:54.945389   79073 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:55.076394   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:39:55.076421   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:39:55.089140   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:55.096473   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:55.128207   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:39:55.128235   79073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:39:55.186402   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.186429   79073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:39:55.262731   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.548177   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548521   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548542   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.548555   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548564   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548824   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548857   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Closing plugin on server side
	I0829 19:39:55.548872   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.555956   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.555971   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.556210   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.556227   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020289   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020317   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020610   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020632   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020642   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020650   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020912   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020931   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.369749   79073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.106975723s)
	I0829 19:39:56.369809   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.369825   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370119   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370143   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370154   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.370168   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370407   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370428   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370440   79073 addons.go:475] Verifying addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:56.373030   79073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:39:56.374322   79073 addons.go:510] duration metric: took 1.698226444s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:39:56.460329   79073 pod_ready.go:93] pod "etcd-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:56.460362   79073 pod_ready.go:82] duration metric: took 1.51494335s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:56.460375   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467017   79073 pod_ready.go:93] pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:58.467040   79073 pod_ready.go:82] duration metric: took 2.006657264s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467050   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:59.826535   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.4017346s)
	I0829 19:39:59.826609   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:59.849311   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:59.859855   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:59.874237   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:59.874262   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:59.874315   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:39:59.883719   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:59.883785   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:59.893307   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:39:59.902478   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:59.902519   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:59.912664   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.932387   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:59.932443   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.948358   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:39:59.965812   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:59.965867   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:59.975437   79559 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:00.022167   79559 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:00.022347   79559 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:00.126622   79559 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:00.126777   79559 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:00.126914   79559 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:00.135123   79559 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:56.736712   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:59.235639   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:00.137714   79559 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:00.137806   79559 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:00.137875   79559 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:00.138003   79559 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:00.138114   79559 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:00.138184   79559 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:00.138240   79559 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:00.138297   79559 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:00.138351   79559 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:00.138443   79559 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:00.138555   79559 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:00.138607   79559 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:00.138682   79559 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:00.368674   79559 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:00.454426   79559 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:00.576835   79559 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:00.650342   79559 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:01.038392   79559 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:01.038806   79559 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:01.041297   79559 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:01.043020   79559 out.go:235]   - Booting up control plane ...
	I0829 19:40:01.043127   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:01.043224   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:01.043501   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:01.062342   79559 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:01.068185   79559 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:01.068247   79559 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:01.202906   79559 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:01.203076   79559 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:01.705241   79559 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.336154ms
	I0829 19:40:01.705368   79559 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:00.476336   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:02.973188   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.473576   79073 pod_ready.go:93] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.473607   79073 pod_ready.go:82] duration metric: took 5.006550689s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.473616   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478026   79073 pod_ready.go:93] pod "kube-proxy-25cmq" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.478045   79073 pod_ready.go:82] duration metric: took 4.423884ms for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478054   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482541   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.482560   79073 pod_ready.go:82] duration metric: took 4.499742ms for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482566   79073 pod_ready.go:39] duration metric: took 8.54603076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:03.482581   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:03.482623   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:03.502670   79073 api_server.go:72] duration metric: took 8.826595134s to wait for apiserver process to appear ...
	I0829 19:40:03.502695   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:03.502718   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:40:03.507953   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:40:03.508948   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:03.508968   79073 api_server.go:131] duration metric: took 6.265433ms to wait for apiserver health ...
	I0829 19:40:03.508977   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:03.514929   79073 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:03.514962   79073 system_pods.go:61] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.514971   79073 system_pods.go:61] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.514979   79073 system_pods.go:61] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.514987   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.514994   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.515000   79073 system_pods.go:61] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.515009   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.515018   79073 system_pods.go:61] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.515027   79073 system_pods.go:61] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.515036   79073 system_pods.go:74] duration metric: took 6.052187ms to wait for pod list to return data ...
	I0829 19:40:03.515046   79073 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:03.518040   79073 default_sa.go:45] found service account: "default"
	I0829 19:40:03.518060   79073 default_sa.go:55] duration metric: took 3.004653ms for default service account to be created ...
	I0829 19:40:03.518069   79073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:03.523915   79073 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:03.523942   79073 system_pods.go:89] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.523949   79073 system_pods.go:89] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.523954   79073 system_pods.go:89] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.523958   79073 system_pods.go:89] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.523962   79073 system_pods.go:89] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.523965   79073 system_pods.go:89] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.523968   79073 system_pods.go:89] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.523973   79073 system_pods.go:89] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.523978   79073 system_pods.go:89] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.523986   79073 system_pods.go:126] duration metric: took 5.911567ms to wait for k8s-apps to be running ...
	I0829 19:40:03.523997   79073 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:03.524049   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:03.541502   79073 system_svc.go:56] duration metric: took 17.4955ms WaitForService to wait for kubelet
	I0829 19:40:03.541538   79073 kubeadm.go:582] duration metric: took 8.865466463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:03.541564   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:03.544700   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:03.544728   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:03.544744   79073 node_conditions.go:105] duration metric: took 3.172559ms to run NodePressure ...
	I0829 19:40:03.544758   79073 start.go:241] waiting for startup goroutines ...
	I0829 19:40:03.544771   79073 start.go:246] waiting for cluster config update ...
	I0829 19:40:03.544789   79073 start.go:255] writing updated cluster config ...
	I0829 19:40:03.545136   79073 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:03.609413   79073 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:03.611490   79073 out.go:177] * Done! kubectl is now configured to use "embed-certs-920571" cluster and "default" namespace by default
	I0829 19:40:01.236210   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.236420   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:05.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:06.707891   79559 kubeadm.go:310] [api-check] The API server is healthy after 5.002523987s
	I0829 19:40:06.719470   79559 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:06.733886   79559 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:06.759672   79559 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:06.759933   79559 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-672127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:06.771514   79559 kubeadm.go:310] [bootstrap-token] Using token: fzav4x.eeztheucmrep51py
	I0829 19:40:06.772887   79559 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:06.773014   79559 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:06.778644   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:06.792388   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:06.798560   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:06.801930   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:06.805767   79559 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:07.119680   79559 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:07.536660   79559 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:08.115528   79559 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:08.115550   79559 kubeadm.go:310] 
	I0829 19:40:08.115621   79559 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:08.115657   79559 kubeadm.go:310] 
	I0829 19:40:08.115780   79559 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:08.115802   79559 kubeadm.go:310] 
	I0829 19:40:08.115843   79559 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:08.115929   79559 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:08.116002   79559 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:08.116011   79559 kubeadm.go:310] 
	I0829 19:40:08.116087   79559 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:08.116099   79559 kubeadm.go:310] 
	I0829 19:40:08.116154   79559 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:08.116173   79559 kubeadm.go:310] 
	I0829 19:40:08.116247   79559 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:08.116386   79559 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:08.116477   79559 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:08.116487   79559 kubeadm.go:310] 
	I0829 19:40:08.116599   79559 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:08.116705   79559 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:08.116712   79559 kubeadm.go:310] 
	I0829 19:40:08.116779   79559 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.116879   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:08.116931   79559 kubeadm.go:310] 	--control-plane 
	I0829 19:40:08.116947   79559 kubeadm.go:310] 
	I0829 19:40:08.117048   79559 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:08.117058   79559 kubeadm.go:310] 
	I0829 19:40:08.117154   79559 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.117270   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:08.118512   79559 kubeadm.go:310] W0829 19:39:59.991394    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118870   79559 kubeadm.go:310] W0829 19:39:59.992249    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118981   79559 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:08.119009   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:40:08.119019   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:08.120832   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:08.122029   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:08.133326   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:08.150808   79559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:08.150867   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:08.150884   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-672127 minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=default-k8s-diff-port-672127 minikube.k8s.io/primary=true
	I0829 19:40:08.170047   79559 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:08.350103   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:07.736119   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:10.236910   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:08.850762   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.350244   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.850222   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.350462   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.850237   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.350179   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.851033   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.351069   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.442963   79559 kubeadm.go:1113] duration metric: took 4.29215456s to wait for elevateKubeSystemPrivileges
	I0829 19:40:12.442998   79559 kubeadm.go:394] duration metric: took 4m56.544013459s to StartCluster
	I0829 19:40:12.443020   79559 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.443110   79559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:40:12.444757   79559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.444998   79559 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:40:12.445061   79559 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:40:12.445138   79559 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445151   79559 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445173   79559 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445181   79559 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:40:12.445179   79559 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-672127"
	I0829 19:40:12.445210   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445210   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:40:12.445266   79559 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445313   79559 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445323   79559 addons.go:243] addon metrics-server should already be in state true
	I0829 19:40:12.445347   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445625   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445658   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445662   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445683   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445737   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445775   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.446414   79559 out.go:177] * Verifying Kubernetes components...
	I0829 19:40:12.447652   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:40:12.461386   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0829 19:40:12.461436   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0829 19:40:12.461805   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.461831   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462057   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0829 19:40:12.462324   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462327   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462341   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462347   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462373   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462701   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462798   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462807   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462817   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462886   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.463109   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.463360   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463392   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.463586   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463607   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.465961   79559 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.465971   79559 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:40:12.465991   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.466309   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.466355   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.480989   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0829 19:40:12.481216   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 19:40:12.481407   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481639   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481843   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.481858   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482222   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.482249   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482291   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482440   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.482576   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482745   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.484681   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485336   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485664   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0829 19:40:12.486377   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.486547   79559 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:40:12.486922   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.486945   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.487310   79559 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:40:12.487586   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.488042   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:40:12.488060   79559 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:40:12.488081   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.488266   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.488307   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.488874   79559 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.488897   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:40:12.488914   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.492291   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492699   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492814   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.492844   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493059   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493128   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.493144   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493259   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493300   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493432   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.493471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493822   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.493972   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.494114   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.505220   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0829 19:40:12.505690   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.506337   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.506363   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.506727   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.506899   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.508602   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.508796   79559 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.508810   79559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:40:12.508829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.511310   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511660   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.511691   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.511969   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.512110   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.512253   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.642279   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:40:12.666598   79559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682873   79559 node_ready.go:49] node "default-k8s-diff-port-672127" has status "Ready":"True"
	I0829 19:40:12.682895   79559 node_ready.go:38] duration metric: took 16.267143ms for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682904   79559 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:12.693451   79559 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:12.736525   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:40:12.736548   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:40:12.754764   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:40:12.754786   79559 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:40:12.806826   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:12.806856   79559 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:40:12.817164   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.837896   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.903140   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:14.124266   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.307063383s)
	I0829 19:40:14.124305   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286373382s)
	I0829 19:40:14.124324   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124343   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124430   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221258684s)
	I0829 19:40:14.124473   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124487   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124635   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124649   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124659   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124667   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124794   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124813   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124831   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124848   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124856   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124873   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124864   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124882   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124896   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124902   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124913   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124935   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.125356   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.125359   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.125381   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126568   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.126637   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.126656   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126704   79559 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-672127"
	I0829 19:40:14.193216   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.193238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.193544   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.193562   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.195467   79559 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0829 19:40:12.237641   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.736679   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.196698   79559 addons.go:510] duration metric: took 1.751639165s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0829 19:40:14.720042   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.199482   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.235908   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.735901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.199705   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.699776   79559 pod_ready.go:93] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.699801   79559 pod_ready.go:82] duration metric: took 7.006327617s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.699810   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704240   79559 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.704261   79559 pod_ready.go:82] duration metric: took 4.444744ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704269   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710740   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.710761   79559 pod_ready.go:82] duration metric: took 2.006486043s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710770   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715111   79559 pod_ready.go:93] pod "kube-proxy-nqbn4" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.715134   79559 pod_ready.go:82] duration metric: took 4.357535ms for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715146   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719192   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.719207   79559 pod_ready.go:82] duration metric: took 4.054087ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719222   79559 pod_ready.go:39] duration metric: took 9.036299009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:21.719234   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:21.719289   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:21.734507   79559 api_server.go:72] duration metric: took 9.289477227s to wait for apiserver process to appear ...
	I0829 19:40:21.734531   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:21.734555   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:40:21.739963   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:40:21.740847   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:21.740865   79559 api_server.go:131] duration metric: took 6.327694ms to wait for apiserver health ...
	I0829 19:40:21.740872   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:21.747609   79559 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:21.747636   79559 system_pods.go:61] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.747643   79559 system_pods.go:61] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:21.747648   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.747654   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.747659   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.747662   79559 system_pods.go:61] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.747665   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.747670   79559 system_pods.go:61] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.747674   79559 system_pods.go:61] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.747680   79559 system_pods.go:74] duration metric: took 6.803459ms to wait for pod list to return data ...
	I0829 19:40:21.747689   79559 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:21.750153   79559 default_sa.go:45] found service account: "default"
	I0829 19:40:21.750168   79559 default_sa.go:55] duration metric: took 2.474593ms for default service account to be created ...
	I0829 19:40:21.750175   79559 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:21.901186   79559 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:21.901213   79559 system_pods.go:89] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.901219   79559 system_pods.go:89] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running
	I0829 19:40:21.901222   79559 system_pods.go:89] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.901227   79559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.901231   79559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.901235   79559 system_pods.go:89] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.901238   79559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.901245   79559 system_pods.go:89] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.901249   79559 system_pods.go:89] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.901257   79559 system_pods.go:126] duration metric: took 151.07798ms to wait for k8s-apps to be running ...
	I0829 19:40:21.901263   79559 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:21.901306   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:21.916730   79559 system_svc.go:56] duration metric: took 15.457902ms WaitForService to wait for kubelet
	I0829 19:40:21.916757   79559 kubeadm.go:582] duration metric: took 9.471732105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:21.916773   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:22.099083   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:22.099119   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:22.099133   79559 node_conditions.go:105] duration metric: took 182.354927ms to run NodePressure ...
	I0829 19:40:22.099147   79559 start.go:241] waiting for startup goroutines ...
	I0829 19:40:22.099156   79559 start.go:246] waiting for cluster config update ...
	I0829 19:40:22.099168   79559 start.go:255] writing updated cluster config ...
	I0829 19:40:22.099536   79559 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:22.148307   79559 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:22.150361   79559 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-672127" cluster and "default" namespace by default
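The profile above is only declared started after the apiserver's /healthz endpoint returns 200 ("ok") and the kube-system pods are verified. As a rough, hypothetical illustration of that kind of readiness probe (this is not minikube's implementation; the URL, timeout, and TLS handling below are assumptions for the sketch), a minimal Go version could look like:

// healthzprobe is a hypothetical sketch of polling an apiserver /healthz
// endpoint until it returns 200, similar in spirit to the checks logged
// above. The URL, timeout, and InsecureSkipVerify are illustrative only;
// a real client would verify against the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.70:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log above the corresponding probe against https://192.168.50.70:8444/healthz succeeds within a few milliseconds.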
	I0829 19:40:21.736121   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:23.229905   78865 pod_ready.go:82] duration metric: took 4m0.000141946s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	E0829 19:40:23.229943   78865 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:40:23.229991   78865 pod_ready.go:39] duration metric: took 4m10.70989222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:23.230021   78865 kubeadm.go:597] duration metric: took 4m18.600330645s to restartPrimaryControlPlane
	W0829 19:40:23.230078   78865 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:40:23.230136   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:49.374221   78865 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.144057875s)
	I0829 19:40:49.374297   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:49.389586   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:40:49.399146   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:40:49.408450   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:40:49.408469   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:40:49.408521   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:40:49.417651   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:40:49.417706   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:40:49.427073   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:40:49.435307   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:40:49.435356   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:40:49.443720   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.452437   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:40:49.452493   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.461133   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:40:49.469515   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:40:49.469564   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:40:49.478224   78865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:49.523193   78865 kubeadm.go:310] W0829 19:40:49.504457    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.523801   78865 kubeadm.go:310] W0829 19:40:49.505165    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.640221   78865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:57.429227   78865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:57.429293   78865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:57.429396   78865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:57.429536   78865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:57.429665   78865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:57.429757   78865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:40:57.431358   78865 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:57.431434   78865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:57.431485   78865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:57.431566   78865 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:57.431640   78865 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:57.431711   78865 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:57.431786   78865 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:57.431847   78865 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:57.431893   78865 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:57.431956   78865 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:57.432013   78865 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:57.432052   78865 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:57.432109   78865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:57.432186   78865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:57.432275   78865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:57.432352   78865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:57.432444   78865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:57.432518   78865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:57.432595   78865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:57.432648   78865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:57.434057   78865 out.go:235]   - Booting up control plane ...
	I0829 19:40:57.434161   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:57.434245   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:57.434298   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:57.434396   78865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:57.434475   78865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:57.434509   78865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:57.434687   78865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:57.434772   78865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:57.434824   78865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.075612ms
	I0829 19:40:57.434887   78865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:57.434932   78865 kubeadm.go:310] [api-check] The API server is healthy after 5.002117161s
	I0829 19:40:57.435094   78865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:57.435232   78865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:57.435284   78865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:57.435429   78865 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-690795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:57.435472   78865 kubeadm.go:310] [bootstrap-token] Using token: adxyev.rcmf9k5ok190h0g1
	I0829 19:40:57.436846   78865 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:57.436936   78865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:57.437001   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:57.437113   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:57.437214   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:57.437307   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:57.437380   78865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:57.437480   78865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:57.437528   78865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:57.437577   78865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:57.437583   78865 kubeadm.go:310] 
	I0829 19:40:57.437635   78865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:57.437641   78865 kubeadm.go:310] 
	I0829 19:40:57.437704   78865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:57.437710   78865 kubeadm.go:310] 
	I0829 19:40:57.437744   78865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:57.437807   78865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:57.437851   78865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:57.437857   78865 kubeadm.go:310] 
	I0829 19:40:57.437907   78865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:57.437913   78865 kubeadm.go:310] 
	I0829 19:40:57.437951   78865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:57.437957   78865 kubeadm.go:310] 
	I0829 19:40:57.438000   78865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:57.438107   78865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:57.438188   78865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:57.438200   78865 kubeadm.go:310] 
	I0829 19:40:57.438289   78865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:57.438359   78865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:57.438364   78865 kubeadm.go:310] 
	I0829 19:40:57.438429   78865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438507   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:57.438525   78865 kubeadm.go:310] 	--control-plane 
	I0829 19:40:57.438534   78865 kubeadm.go:310] 
	I0829 19:40:57.438611   78865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:57.438621   78865 kubeadm.go:310] 
	I0829 19:40:57.438688   78865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438791   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:57.438814   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:40:57.438825   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:57.440836   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:57.442065   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:57.452700   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:57.469549   78865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:57.469621   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:57.469656   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-690795 minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=no-preload-690795 minikube.k8s.io/primary=true
	I0829 19:40:57.503411   78865 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:57.648807   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.149067   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.649770   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.148932   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.649114   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.149833   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.649474   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.149795   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.649154   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.745084   78865 kubeadm.go:1113] duration metric: took 4.275525047s to wait for elevateKubeSystemPrivileges
	I0829 19:41:01.745117   78865 kubeadm.go:394] duration metric: took 4m57.169926854s to StartCluster
	I0829 19:41:01.745134   78865 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.745209   78865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:41:01.746775   78865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.747005   78865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:41:01.747062   78865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:41:01.747155   78865 addons.go:69] Setting storage-provisioner=true in profile "no-preload-690795"
	I0829 19:41:01.747175   78865 addons.go:69] Setting default-storageclass=true in profile "no-preload-690795"
	I0829 19:41:01.747189   78865 addons.go:234] Setting addon storage-provisioner=true in "no-preload-690795"
	W0829 19:41:01.747199   78865 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:41:01.747200   78865 addons.go:69] Setting metrics-server=true in profile "no-preload-690795"
	I0829 19:41:01.747240   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747246   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:41:01.747243   78865 addons.go:234] Setting addon metrics-server=true in "no-preload-690795"
	W0829 19:41:01.747307   78865 addons.go:243] addon metrics-server should already be in state true
	I0829 19:41:01.747333   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747206   78865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-690795"
	I0829 19:41:01.747652   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747670   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747678   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747702   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747780   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747810   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.748790   78865 out.go:177] * Verifying Kubernetes components...
	I0829 19:41:01.750069   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:41:01.764006   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0829 19:41:01.765511   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766194   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.766218   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.766287   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0829 19:41:01.766670   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766694   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.766912   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.766965   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0829 19:41:01.767129   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767149   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.767304   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.767506   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.767737   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767755   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.768073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.768202   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768241   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.768615   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768646   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.771065   78865 addons.go:234] Setting addon default-storageclass=true in "no-preload-690795"
	W0829 19:41:01.771088   78865 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:41:01.771117   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.771415   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.771441   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.787271   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0829 19:41:01.788003   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.788577   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.788606   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.788885   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0829 19:41:01.789065   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0829 19:41:01.789073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.789361   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.789716   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.789774   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.790084   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.790243   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.790319   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.791018   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.791029   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.791393   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.791721   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.792306   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793557   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793806   78865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:41:01.794942   78865 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:41:01.795033   78865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:01.795049   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:41:01.795067   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.796032   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:41:01.796048   78865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:41:01.796065   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.799646   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800163   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800618   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800826   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800843   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800941   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801043   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801114   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801184   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801239   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801367   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.801484   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.807187   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0829 19:41:01.807604   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.808056   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.808070   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.808471   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.808671   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.810374   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.810569   78865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:01.810579   78865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:41:01.810591   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.813314   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.813766   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.813776   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.814029   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.814187   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.814292   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.814379   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.963011   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:41:01.981935   78865 node_ready.go:35] waiting up to 6m0s for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998366   78865 node_ready.go:49] node "no-preload-690795" has status "Ready":"True"
	I0829 19:41:01.998389   78865 node_ready.go:38] duration metric: took 16.418591ms for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998398   78865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:02.005811   78865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:02.053495   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:02.197657   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:02.239853   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:41:02.239877   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:41:02.270764   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:41:02.270789   78865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:41:02.327819   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.327853   78865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:41:02.380812   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.380843   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381117   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381191   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.381209   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.381217   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381432   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381444   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.384211   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.387013   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.387027   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.387286   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:02.387333   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.387345   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.027502   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:03.027535   78865 pod_ready.go:82] duration metric: took 1.02170157s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.027550   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.410428   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212715771s)
	I0829 19:41:03.410485   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.410503   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412586   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.412590   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412614   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412625   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.412632   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412926   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412947   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412954   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.587379   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203116606s)
	I0829 19:41:03.587437   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587452   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587770   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.587840   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.587859   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587874   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587878   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.588185   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.588206   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.588218   78865 addons.go:475] Verifying addon metrics-server=true in "no-preload-690795"
	I0829 19:41:03.588192   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.590131   78865 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:03.591280   78865 addons.go:510] duration metric: took 1.844219817s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:41:05.035315   78865 pod_ready.go:103] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"False"
	I0829 19:41:06.033037   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:06.033060   78865 pod_ready.go:82] duration metric: took 3.005501862s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:06.033068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039035   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.039059   78865 pod_ready.go:82] duration metric: took 1.005984859s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043096   78865 pod_ready.go:93] pod "kube-proxy-p7zvh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.043116   78865 pod_ready.go:82] duration metric: took 4.042896ms for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043125   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046934   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.046957   78865 pod_ready.go:82] duration metric: took 3.826283ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046966   78865 pod_ready.go:39] duration metric: took 5.048560252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:07.046983   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:41:07.047036   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:41:07.062234   78865 api_server.go:72] duration metric: took 5.315200823s to wait for apiserver process to appear ...
	I0829 19:41:07.062256   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:41:07.062277   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:41:07.068022   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:41:07.069170   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:41:07.069190   78865 api_server.go:131] duration metric: took 6.927858ms to wait for apiserver health ...
	I0829 19:41:07.069198   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:41:07.075909   78865 system_pods.go:59] 9 kube-system pods found
	I0829 19:41:07.075932   78865 system_pods.go:61] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.075939   78865 system_pods.go:61] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.075944   78865 system_pods.go:61] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.075949   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.075953   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.075956   78865 system_pods.go:61] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.075960   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.075964   78865 system_pods.go:61] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.075968   78865 system_pods.go:61] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.075975   78865 system_pods.go:74] duration metric: took 6.771333ms to wait for pod list to return data ...
	I0829 19:41:07.075985   78865 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:41:07.079235   78865 default_sa.go:45] found service account: "default"
	I0829 19:41:07.079255   78865 default_sa.go:55] duration metric: took 3.264804ms for default service account to be created ...
	I0829 19:41:07.079263   78865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:41:07.083981   78865 system_pods.go:86] 9 kube-system pods found
	I0829 19:41:07.084006   78865 system_pods.go:89] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.084014   78865 system_pods.go:89] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.084019   78865 system_pods.go:89] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.084025   78865 system_pods.go:89] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.084029   78865 system_pods.go:89] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.084032   78865 system_pods.go:89] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.084037   78865 system_pods.go:89] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.084042   78865 system_pods.go:89] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.084045   78865 system_pods.go:89] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.084052   78865 system_pods.go:126] duration metric: took 4.784448ms to wait for k8s-apps to be running ...
	I0829 19:41:07.084062   78865 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:41:07.084104   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:07.098513   78865 system_svc.go:56] duration metric: took 14.440998ms WaitForService to wait for kubelet
	I0829 19:41:07.098551   78865 kubeadm.go:582] duration metric: took 5.351518255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:41:07.098574   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:41:07.231160   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:41:07.231189   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:41:07.231200   78865 node_conditions.go:105] duration metric: took 132.62068ms to run NodePressure ...
	I0829 19:41:07.231209   78865 start.go:241] waiting for startup goroutines ...
	I0829 19:41:07.231216   78865 start.go:246] waiting for cluster config update ...
	I0829 19:41:07.231225   78865 start.go:255] writing updated cluster config ...
	I0829 19:41:07.231503   78865 ssh_runner.go:195] Run: rm -f paused
	I0829 19:41:07.283204   78865 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:41:07.284751   78865 out.go:177] * Done! kubectl is now configured to use "no-preload-690795" cluster and "default" namespace by default
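Both successful starts above also gate on every system-critical pod reporting the Ready condition (the pod_ready.go waits). A simplified sketch of such a check using client-go is shown below; the pod name, namespace, kubeconfig path, and timeout are illustrative assumptions, not the code referenced in the log:

// podready is a hypothetical sketch of waiting for a single pod's Ready
// condition, comparable to the pod_ready.go waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one pod until Ready or the deadline expires (both illustrative).
	name, ns := "etcd-no-preload-690795", "kube-system"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}

The failing old-k8s-version run that follows never reaches this stage: its kubelet health check at http://localhost:10248/healthz keeps refusing connections, so kubeadm init times out.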
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
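
Everything from the kubeadm reset a few lines above through the rm commands here is minikube's stale-config cleanup before the retry: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is deleted otherwise. A hand-run equivalent, using the same commands the log shows (sketch only):

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done
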
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
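
With init failed, minikube probes for each expected control-plane container by name through crictl; every probe above returns an empty list, meaning no control-plane container was ever created. The same check can be run on the node by hand (sketch, same crictl filter the log uses):

	for name in kube-apiserver etcd kube-scheduler kube-controller-manager kube-proxy coredns; do
	  echo "== $name =="
	  sudo crictl ps -a --name=$name
	done
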
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
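
The four "Gathering logs" runs above are the raw material for the failure summary that follows; the describe-nodes step fails because nothing is listening on localhost:8443. Reproducing the same collection manually on the node (sketch, commands copied from the log):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
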
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 
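
Before the CRI-O dump below, the practical next step is the suggestion printed above: inspect the kubelet on the node and, if the cgroup driver is the mismatch, retry the profile with the suggested flag. A sketch using only commands already named in this log (<profile> is a placeholder for the failing profile name):

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
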
	
	
	==> CRI-O <==
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.590469022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960945590449741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5772a6c8-16d9-4b00-bdb7-140ba18d00c4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.590852377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee573d4a-b94a-4de9-b2ef-5df465a172a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.590944080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee573d4a-b94a-4de9-b2ef-5df465a172a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.591153668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee573d4a-b94a-4de9-b2ef-5df465a172a3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.626712213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ed65cdb-c5e9-4b9b-ace5-48ddf68a18da name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.626802697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ed65cdb-c5e9-4b9b-ace5-48ddf68a18da name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.627591532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b2edbca-c676-4676-86a1-900e0039e070 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.628164559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960945628121873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b2edbca-c676-4676-86a1-900e0039e070 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.628667540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=039e8fe7-2e1d-4c23-b081-8911e24c2325 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.628731033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=039e8fe7-2e1d-4c23-b081-8911e24c2325 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.628962408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=039e8fe7-2e1d-4c23-b081-8911e24c2325 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.663779531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15c585dd-72e6-4232-9913-cf15fec18d77 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.663867736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15c585dd-72e6-4232-9913-cf15fec18d77 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.665262699Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb47e222-341e-4e9b-be43-fa9ce9308339 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.665680063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960945665654409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb47e222-341e-4e9b-be43-fa9ce9308339 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.666373341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2684fa62-cee4-4d87-9049-9c06ea3f5a3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.666443378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2684fa62-cee4-4d87-9049-9c06ea3f5a3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.666872725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2684fa62-cee4-4d87-9049-9c06ea3f5a3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.698621454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6b967af-2692-4cf4-b3ae-77a3824d8096 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.698705166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6b967af-2692-4cf4-b3ae-77a3824d8096 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.700479807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3618fb83-e23e-468f-a9ef-c3b7dcb66655 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.701068005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960945700875570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3618fb83-e23e-468f-a9ef-c3b7dcb66655 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.701622830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=975515a1-6f85-4c00-9163-b487e42ce90f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.701706030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=975515a1-6f85-4c00-9163-b487e42ce90f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:05 embed-certs-920571 crio[710]: time="2024-08-29 19:49:05.701951580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=975515a1-6f85-4c00-9163-b487e42ce90f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c8cd20fb8775       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   4a51e94ded92d       storage-provisioner
	5d756e81dd539       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b5b83094e8553       coredns-6f6b679f8f-9f75n
	de983bf227ed9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b0986a8b7cde5       coredns-6f6b679f8f-8qrn6
	72c825e53ea42       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   29fc6b729b9b0       kube-proxy-25cmq
	eb08aba65a9c3       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   dcb65c7deae1d       kube-controller-manager-embed-certs-920571
	26bf84f946f25       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   17e2a8fcd5784       etcd-embed-certs-920571
	0793eb009f9d3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   8015eca353e55       kube-scheduler-embed-certs-920571
	237dcc747150f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   5ca3436db099b       kube-apiserver-embed-certs-920571
	e8d9ba1547d65       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   f7441c62737ea       kube-apiserver-embed-certs-920571
	
	
	==> coredns [5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-920571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-920571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=embed-certs-920571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:39:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-920571
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:49:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:45:06 +0000   Thu, 29 Aug 2024 19:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:45:06 +0000   Thu, 29 Aug 2024 19:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:45:06 +0000   Thu, 29 Aug 2024 19:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:45:06 +0000   Thu, 29 Aug 2024 19:39:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.243
	  Hostname:    embed-certs-920571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85be8983c3db432aa3105d0a59604c10
	  System UUID:                85be8983-c3db-432a-a310-5d0a59604c10
	  Boot ID:                    11f022a9-6b03-438a-9ef5-3b96d6649273
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-8qrn6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-6f6b679f8f-9f75n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-920571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-920571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-920571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-25cmq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-920571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-kb2c6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node embed-certs-920571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node embed-certs-920571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node embed-certs-920571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node embed-certs-920571 event: Registered Node embed-certs-920571 in Controller
	  Normal  CIDRAssignmentFailed     9m11s  cidrAllocator    Node embed-certs-920571 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.050446] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035952] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.694086] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.914496] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.518927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.935828] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.056566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058678] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.174074] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.137591] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.281986] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +3.985971] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +2.298735] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.063002] kauditd_printk_skb: 158 callbacks suppressed
	[Aug29 19:35] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.828767] kauditd_printk_skb: 85 callbacks suppressed
	[Aug29 19:39] systemd-fstab-generator[2535]: Ignoring "noauto" option for root device
	[  +0.060698] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.990032] systemd-fstab-generator[2857]: Ignoring "noauto" option for root device
	[  +0.088072] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.793146] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +0.659371] kauditd_printk_skb: 34 callbacks suppressed
	[Aug29 19:40] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426] <==
	{"level":"info","ts":"2024-08-29T19:39:45.345241Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:39:45.339196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f switched to configuration voters=(8092916432911584799)"}
	{"level":"info","ts":"2024-08-29T19:39:45.345424Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"29cc905037b78c6d","local-member-id":"704fd09e1c9dce1f","added-peer-id":"704fd09e1c9dce1f","added-peer-peer-urls":["https://192.168.61.243:2380"]}
	{"level":"info","ts":"2024-08-29T19:39:45.339257Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-29T19:39:45.347982Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.243:2380"}
	{"level":"info","ts":"2024-08-29T19:39:45.754970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T19:39:45.755129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T19:39:45.755190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgPreVoteResp from 704fd09e1c9dce1f at term 1"}
	{"level":"info","ts":"2024-08-29T19:39:45.755231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.755264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgVoteResp from 704fd09e1c9dce1f at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.755300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became leader at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.755332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 704fd09e1c9dce1f elected leader 704fd09e1c9dce1f at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.760126Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"704fd09e1c9dce1f","local-member-attributes":"{Name:embed-certs-920571 ClientURLs:[https://192.168.61.243:2379]}","request-path":"/0/members/704fd09e1c9dce1f/attributes","cluster-id":"29cc905037b78c6d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:39:45.762033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:39:45.762127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:39:45.762368Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:39:45.762406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:39:45.762472Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.763110Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:39:45.766981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"29cc905037b78c6d","local-member-id":"704fd09e1c9dce1f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.767091Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.767133Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.767809Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:39:45.768543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:39:45.779975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.243:2379"}
	
	
	==> kernel <==
	 19:49:06 up 14 min,  0 users,  load average: 0.13, 0.24, 0.18
	Linux embed-certs-920571 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c] <==
	W0829 19:44:48.424249       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:44:48.424345       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:44:48.425297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:44:48.425448       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:45:48.426098       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:45:48.426181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0829 19:45:48.426140       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:45:48.426422       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:45:48.427310       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:45:48.428443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:47:48.428038       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:47:48.428128       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:47:48.429248       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:47:48.429342       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:47:48.429412       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:47:48.430600       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904] <==
	W0829 19:39:39.611316       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.620061       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.695042       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.719858       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.735779       1 logging.go:55] [core] [Channel #20 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.758191       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.835180       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.875467       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.886181       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.888522       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.946646       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.991614       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.991725       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.020639       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.043657       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.055568       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.063230       1 logging.go:55] [core] [Channel #13 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.153111       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.180985       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.213237       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.243994       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.244198       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.336373       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.477398       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.813183       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f] <==
	E0829 19:43:54.353350       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:43:54.940787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:44:24.359213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:44:24.949329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:44:54.365500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:44:54.962213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:45:06.721279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-920571"
	E0829 19:45:24.371830       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:45:24.970460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:45:54.378973       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:45:54.977854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:46:03.212926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="266.191µs"
	I0829 19:46:15.211864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="130.686µs"
	E0829 19:46:24.384831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:46:24.986092       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:46:54.392064       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:46:55.000420       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:47:24.398526       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:47:25.009442       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:47:54.406466       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:47:55.017088       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:48:24.412584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:48:25.023619       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:48:54.420383       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:48:55.033851       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:39:56.174641       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:39:56.265129       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0829 19:39:56.273040       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:39:56.518146       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:39:56.518206       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:39:56.518235       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:39:56.532020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:39:56.532248       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:39:56.532259       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:39:56.534036       1 config.go:197] "Starting service config controller"
	I0829 19:39:56.534059       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:39:56.534079       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:39:56.534088       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:39:56.534514       1 config.go:326] "Starting node config controller"
	I0829 19:39:56.534522       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:39:56.636022       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:39:56.636049       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:39:56.636077       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4] <==
	W0829 19:39:47.458611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:39:47.459854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:47.460546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 19:39:47.460577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.261263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 19:39:48.261409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.306386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:39:48.306434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.307286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 19:39:48.307328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.430464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:39:48.430519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.507772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 19:39:48.507822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.553646       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 19:39:48.553735       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 19:39:48.568081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 19:39:48.568125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.688603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 19:39:48.688674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.712472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:39:48.712525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.740668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:39:48.740772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0829 19:39:50.446857       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:47:57 embed-certs-920571 kubelet[2864]: E0829 19:47:57.196813    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:48:00 embed-certs-920571 kubelet[2864]: E0829 19:48:00.308015    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960880307216593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:00 embed-certs-920571 kubelet[2864]: E0829 19:48:00.308082    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960880307216593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:10 embed-certs-920571 kubelet[2864]: E0829 19:48:10.313544    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960890313059838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:10 embed-certs-920571 kubelet[2864]: E0829 19:48:10.314816    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960890313059838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:12 embed-certs-920571 kubelet[2864]: E0829 19:48:12.196601    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:48:20 embed-certs-920571 kubelet[2864]: E0829 19:48:20.317061    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960900316660273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:20 embed-certs-920571 kubelet[2864]: E0829 19:48:20.317338    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960900316660273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:24 embed-certs-920571 kubelet[2864]: E0829 19:48:24.197694    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:48:30 embed-certs-920571 kubelet[2864]: E0829 19:48:30.319302    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960910318844850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:30 embed-certs-920571 kubelet[2864]: E0829 19:48:30.319334    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960910318844850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:36 embed-certs-920571 kubelet[2864]: E0829 19:48:36.196927    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:48:40 embed-certs-920571 kubelet[2864]: E0829 19:48:40.321124    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960920320737088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:40 embed-certs-920571 kubelet[2864]: E0829 19:48:40.321177    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960920320737088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:47 embed-certs-920571 kubelet[2864]: E0829 19:48:47.197343    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]: E0829 19:48:50.229768    2864 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]: E0829 19:48:50.322816    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960930322423478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:50 embed-certs-920571 kubelet[2864]: E0829 19:48:50.322839    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960930322423478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:00 embed-certs-920571 kubelet[2864]: E0829 19:49:00.323930    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960940323583931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:00 embed-certs-920571 kubelet[2864]: E0829 19:49:00.323970    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960940323583931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:02 embed-certs-920571 kubelet[2864]: E0829 19:49:02.197634    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	
	
	==> storage-provisioner [8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc] <==
	I0829 19:39:56.833669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 19:39:56.850614       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 19:39:56.850754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 19:39:56.865323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 19:39:56.865537       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-920571_1273688a-6773-4476-8330-4dc5dd3490c5!
	I0829 19:39:56.865636       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1414218a-6002-4eea-bfcc-2d73fa2d7d66", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-920571_1273688a-6773-4476-8330-4dc5dd3490c5 became leader
	I0829 19:39:56.965952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-920571_1273688a-6773-4476-8330-4dc5dd3490c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-920571 -n embed-certs-920571
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-920571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kb2c6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-920571 describe pod metrics-server-6867b74b74-kb2c6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-920571 describe pod metrics-server-6867b74b74-kb2c6: exit status 1 (62.233392ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kb2c6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-920571 describe pod metrics-server-6867b74b74-kb2c6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.07s)
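By the time the describe ran, the pod named above no longer existed, hence the NotFound. A hedged manual re-check (illustrative only; the context name is taken from the output above, and the k8s-app=metrics-server label is an assumption about how the metrics-server addon labels its pods) lists by label instead of pinning a stale pod name:

# Illustrative re-check: list whatever metrics-server pod(s) currently exist for this profile
kubectl --context embed-certs-920571 get pods -n kube-system -l k8s-app=metrics-server -o wide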

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0829 19:40:49.950227   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-29 19:49:22.660311279 +0000 UTC m=+6235.381104063
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
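As a rough manual cross-check of the condition the test was waiting on (a sketch only; the context name, namespace, label, and nine-minute budget are taken from the lines above, while the use of kubectl wait is an assumption about how one might poll by hand rather than what the test itself runs):

# Show whether any dashboard pods exist at all for this profile's context
kubectl --context default-k8s-diff-port-672127 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

# One-shot wait with the same nine-minute budget the test used
kubectl --context default-k8s-diff-port-672127 wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m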
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-672127 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-672127 logs -n 25: (2.033884028s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo cat                              | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:31:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:32:00.606343   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:03.678411   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:09.758354   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:12.830416   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:18.910387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:21.982407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:28.062408   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:31.134407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:37.214369   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:40.286345   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:46.366360   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:49.438406   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:55.518437   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:58.590377   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:04.670397   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:07.742436   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:13.822348   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:16.894422   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:22.974353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:26.046337   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:32.126325   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:35.198391   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:41.278353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:44.350421   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:50.434297   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:53.502296   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:59.582448   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:02.654443   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:08.734358   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:11.806435   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:17.886372   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:20.958351   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:27.038356   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:30.110387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:33.114600   79073 start.go:364] duration metric: took 4m24.136110592s to acquireMachinesLock for "embed-certs-920571"
	I0829 19:34:33.114658   79073 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:33.114666   79073 fix.go:54] fixHost starting: 
	I0829 19:34:33.115014   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:33.115043   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:33.130652   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0829 19:34:33.131096   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:33.131536   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:34:33.131555   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:33.131871   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:33.132060   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:33.132217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:34:33.133784   79073 fix.go:112] recreateIfNeeded on embed-certs-920571: state=Stopped err=<nil>
	I0829 19:34:33.133809   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	W0829 19:34:33.133951   79073 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:33.135573   79073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-920571" ...
	I0829 19:34:33.136726   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Start
	I0829 19:34:33.136873   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring networks are active...
	I0829 19:34:33.137613   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network default is active
	I0829 19:34:33.137909   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network mk-embed-certs-920571 is active
	I0829 19:34:33.138400   79073 main.go:141] libmachine: (embed-certs-920571) Getting domain xml...
	I0829 19:34:33.139091   79073 main.go:141] libmachine: (embed-certs-920571) Creating domain...
	I0829 19:34:33.112327   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:33.112363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112705   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:34:33.112736   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112943   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:34:33.114457   78865 machine.go:96] duration metric: took 4m37.430735456s to provisionDockerMachine
	I0829 19:34:33.114505   78865 fix.go:56] duration metric: took 4m37.452542806s for fixHost
	I0829 19:34:33.114516   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 4m37.452590646s
	W0829 19:34:33.114545   78865 start.go:714] error starting host: provision: host is not running
	W0829 19:34:33.114637   78865 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 19:34:33.114647   78865 start.go:729] Will try again in 5 seconds ...
	I0829 19:34:34.366249   79073 main.go:141] libmachine: (embed-certs-920571) Waiting to get IP...
	I0829 19:34:34.367233   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.367595   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.367671   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.367580   80412 retry.go:31] will retry after 294.1031ms: waiting for machine to come up
	I0829 19:34:34.663229   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.663677   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.663709   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.663624   80412 retry.go:31] will retry after 345.352879ms: waiting for machine to come up
	I0829 19:34:35.010102   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.010576   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.010604   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.010527   80412 retry.go:31] will retry after 295.49024ms: waiting for machine to come up
	I0829 19:34:35.308077   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.308580   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.308608   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.308525   80412 retry.go:31] will retry after 575.095429ms: waiting for machine to come up
	I0829 19:34:35.885400   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.885806   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.885835   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.885762   80412 retry.go:31] will retry after 524.463725ms: waiting for machine to come up
	I0829 19:34:36.411496   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:36.411840   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:36.411866   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:36.411802   80412 retry.go:31] will retry after 672.277111ms: waiting for machine to come up
	I0829 19:34:37.085978   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:37.086512   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:37.086537   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:37.086473   80412 retry.go:31] will retry after 1.185875442s: waiting for machine to come up
	I0829 19:34:38.274401   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:38.274881   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:38.274914   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:38.274827   80412 retry.go:31] will retry after 1.426721352s: waiting for machine to come up
	I0829 19:34:38.116486   78865 start.go:360] acquireMachinesLock for no-preload-690795: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:34:39.703333   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:39.703732   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:39.703756   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:39.703691   80412 retry.go:31] will retry after 1.500429564s: waiting for machine to come up
	I0829 19:34:41.206311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:41.206854   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:41.206882   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:41.206766   80412 retry.go:31] will retry after 2.021866027s: waiting for machine to come up
	I0829 19:34:43.230915   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:43.231329   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:43.231382   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:43.231318   80412 retry.go:31] will retry after 2.415112477s: waiting for machine to come up
	I0829 19:34:45.649815   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:45.650169   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:45.650221   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:45.650140   80412 retry.go:31] will retry after 3.292956483s: waiting for machine to come up
	I0829 19:34:50.094786   79559 start.go:364] duration metric: took 3m31.488453615s to acquireMachinesLock for "default-k8s-diff-port-672127"
	I0829 19:34:50.094847   79559 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:50.094857   79559 fix.go:54] fixHost starting: 
	I0829 19:34:50.095330   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:50.095367   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:50.112044   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0829 19:34:50.112510   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:50.112941   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:34:50.112964   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:50.113325   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:50.113522   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:34:50.113663   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:34:50.115335   79559 fix.go:112] recreateIfNeeded on default-k8s-diff-port-672127: state=Stopped err=<nil>
	I0829 19:34:50.115378   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	W0829 19:34:50.115548   79559 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:50.117176   79559 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-672127" ...
	I0829 19:34:48.944274   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.944748   79073 main.go:141] libmachine: (embed-certs-920571) Found IP for machine: 192.168.61.243
	I0829 19:34:48.944776   79073 main.go:141] libmachine: (embed-certs-920571) Reserving static IP address...
	I0829 19:34:48.944793   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has current primary IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.945167   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.945195   79073 main.go:141] libmachine: (embed-certs-920571) Reserved static IP address: 192.168.61.243
	I0829 19:34:48.945214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | skip adding static IP to network mk-embed-certs-920571 - found existing host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"}
	I0829 19:34:48.945225   79073 main.go:141] libmachine: (embed-certs-920571) Waiting for SSH to be available...
	I0829 19:34:48.945236   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Getting to WaitForSSH function...
	I0829 19:34:48.947646   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948004   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.948034   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948132   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH client type: external
	I0829 19:34:48.948162   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa (-rw-------)
	I0829 19:34:48.948280   79073 main.go:141] libmachine: (embed-certs-920571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:34:48.948307   79073 main.go:141] libmachine: (embed-certs-920571) DBG | About to run SSH command:
	I0829 19:34:48.948328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | exit 0
	I0829 19:34:49.073781   79073 main.go:141] libmachine: (embed-certs-920571) DBG | SSH cmd err, output: <nil>: 
	I0829 19:34:49.074184   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetConfigRaw
	I0829 19:34:49.074813   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.077014   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077349   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.077369   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077550   79073 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:34:49.077724   79073 machine.go:93] provisionDockerMachine start ...
	I0829 19:34:49.077739   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.077936   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.080112   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080448   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.080472   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080548   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.080715   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080853   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080983   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.081110   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.081294   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.081306   79073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:34:49.182232   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:34:49.182282   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182556   79073 buildroot.go:166] provisioning hostname "embed-certs-920571"
	I0829 19:34:49.182582   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182783   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.185368   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185727   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.185751   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185901   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.186077   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186237   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186379   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.186505   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.186721   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.186740   79073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-920571 && echo "embed-certs-920571" | sudo tee /etc/hostname
	I0829 19:34:49.300225   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-920571
	
	I0829 19:34:49.300261   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.303129   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303497   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.303528   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303682   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.303883   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304061   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304193   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.304466   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.304650   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.304667   79073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920571/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:34:49.413678   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:49.413710   79073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:34:49.413765   79073 buildroot.go:174] setting up certificates
	I0829 19:34:49.413774   79073 provision.go:84] configureAuth start
	I0829 19:34:49.413786   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.414069   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.416618   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.416965   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.416993   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.417143   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.419308   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419585   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.419630   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419746   79073 provision.go:143] copyHostCerts
	I0829 19:34:49.419802   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:34:49.419820   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:34:49.419882   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:34:49.419973   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:34:49.419981   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:34:49.420005   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:34:49.420055   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:34:49.420063   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:34:49.420083   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:34:49.420129   79073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920571 san=[127.0.0.1 192.168.61.243 embed-certs-920571 localhost minikube]
	I0829 19:34:49.488345   79073 provision.go:177] copyRemoteCerts
	I0829 19:34:49.488396   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:34:49.488418   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.490954   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491290   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.491328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491473   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.491667   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.491794   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.491932   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.571847   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:34:49.594401   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 19:34:49.615988   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:34:49.638030   79073 provision.go:87] duration metric: took 224.241128ms to configureAuth
	I0829 19:34:49.638058   79073 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:34:49.638251   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:49.638342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.640876   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.641244   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641439   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.641662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641941   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.642126   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.642292   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.642307   79073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:34:49.862247   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:34:49.862276   79073 machine.go:96] duration metric: took 784.541058ms to provisionDockerMachine
	I0829 19:34:49.862286   79073 start.go:293] postStartSetup for "embed-certs-920571" (driver="kvm2")
	I0829 19:34:49.862296   79073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:34:49.862325   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.862632   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:34:49.862660   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.865463   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.865871   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.865899   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.866068   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.866285   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.866459   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.866644   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.948826   79073 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:34:49.952779   79073 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:34:49.952800   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:34:49.952858   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:34:49.952935   79073 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:34:49.953034   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:34:49.962083   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:49.986910   79073 start.go:296] duration metric: took 124.612025ms for postStartSetup
	I0829 19:34:49.986944   79073 fix.go:56] duration metric: took 16.872279139s for fixHost
	I0829 19:34:49.986964   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.989581   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.989919   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.989946   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.990080   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.990281   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990519   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.990835   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.991009   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.991020   79073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:34:50.094598   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960090.067799977
	
	I0829 19:34:50.094618   79073 fix.go:216] guest clock: 1724960090.067799977
	I0829 19:34:50.094626   79073 fix.go:229] Guest: 2024-08-29 19:34:50.067799977 +0000 UTC Remote: 2024-08-29 19:34:49.98694779 +0000 UTC m=+281.148944887 (delta=80.852187ms)
	I0829 19:34:50.094667   79073 fix.go:200] guest clock delta is within tolerance: 80.852187ms
	I0829 19:34:50.094672   79073 start.go:83] releasing machines lock for "embed-certs-920571", held for 16.98003549s
	I0829 19:34:50.094697   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.094962   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:50.097867   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098301   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.098331   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098494   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099007   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099190   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099276   79073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:34:50.099322   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.099429   79073 ssh_runner.go:195] Run: cat /version.json
	I0829 19:34:50.099453   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.101909   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.101932   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102283   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102342   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102363   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102460   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102647   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102717   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102899   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102964   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.103032   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.178744   79073 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:50.220024   79073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:34:50.370308   79073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:34:50.379363   79073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:34:50.379435   79073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:34:50.394787   79073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:34:50.394810   79073 start.go:495] detecting cgroup driver to use...
	I0829 19:34:50.394886   79073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:34:50.410061   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:34:50.423846   79073 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:34:50.423910   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:34:50.437117   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:34:50.450318   79073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:34:50.563588   79073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:34:50.706261   79073 docker.go:233] disabling docker service ...
	I0829 19:34:50.706356   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:34:50.721443   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:34:50.734284   79073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:34:50.871611   79073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:34:51.006487   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:34:51.019543   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:34:51.036398   79073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:34:51.036444   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.045884   79073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:34:51.045931   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.055634   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.065379   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.075104   79073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:34:51.085560   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.095777   79073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.114679   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.125695   79073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:34:51.135263   79073 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:34:51.135328   79073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:34:51.148534   79073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:34:51.158658   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:51.281185   79073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:34:51.378558   79073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:34:51.378618   79073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:34:51.383580   79073 start.go:563] Will wait 60s for crictl version
	I0829 19:34:51.383638   79073 ssh_runner.go:195] Run: which crictl
	I0829 19:34:51.387081   79073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:34:51.426413   79073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:34:51.426491   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.453777   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.481306   79073 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:34:50.118500   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Start
	I0829 19:34:50.118776   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring networks are active...
	I0829 19:34:50.119618   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network default is active
	I0829 19:34:50.120105   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network mk-default-k8s-diff-port-672127 is active
	I0829 19:34:50.120501   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Getting domain xml...
	I0829 19:34:50.121238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Creating domain...
	I0829 19:34:51.414344   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting to get IP...
	I0829 19:34:51.415308   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415790   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.415692   80540 retry.go:31] will retry after 256.92247ms: waiting for machine to come up
	I0829 19:34:51.674173   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674728   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674754   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.674670   80540 retry.go:31] will retry after 338.812858ms: waiting for machine to come up
	I0829 19:34:52.015450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.015977   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.016009   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.015920   80540 retry.go:31] will retry after 385.497306ms: waiting for machine to come up
	I0829 19:34:52.403718   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404324   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404361   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.404259   80540 retry.go:31] will retry after 536.615454ms: waiting for machine to come up
	I0829 19:34:52.943166   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943736   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.943678   80540 retry.go:31] will retry after 584.895039ms: waiting for machine to come up
	I0829 19:34:51.482485   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:51.485272   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485599   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:51.485632   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485803   79073 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:34:51.490493   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:51.505212   79073 kubeadm.go:883] updating cluster {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:34:51.505359   79073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:34:51.505413   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:51.539415   79073 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:34:51.539485   79073 ssh_runner.go:195] Run: which lz4
	I0829 19:34:51.543107   79073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:34:51.546831   79073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:34:51.546864   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:34:52.815579   79073 crio.go:462] duration metric: took 1.272496626s to copy over tarball
	I0829 19:34:52.815659   79073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:34:53.530873   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531510   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531540   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:53.531452   80540 retry.go:31] will retry after 790.882954ms: waiting for machine to come up
	I0829 19:34:54.324385   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324785   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324817   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:54.324706   80540 retry.go:31] will retry after 815.842176ms: waiting for machine to come up
	I0829 19:34:55.142878   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143375   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:55.143325   80540 retry.go:31] will retry after 1.177682749s: waiting for machine to come up
	I0829 19:34:56.322780   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323215   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323248   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:56.323160   80540 retry.go:31] will retry after 1.158169512s: waiting for machine to come up
	I0829 19:34:57.483529   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.483990   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.484023   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:57.483917   80540 retry.go:31] will retry after 1.631842784s: waiting for machine to come up
	I0829 19:34:54.931044   79073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.115353131s)
	I0829 19:34:54.931077   79073 crio.go:469] duration metric: took 2.115468165s to extract the tarball
	I0829 19:34:54.931086   79073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:34:54.967902   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:55.006987   79073 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:34:55.007010   79073 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:34:55.007017   79073 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0829 19:34:55.007123   79073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:34:55.007187   79073 ssh_runner.go:195] Run: crio config
	I0829 19:34:55.051987   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:34:55.052016   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:34:55.052039   79073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:34:55.052077   79073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920571 NodeName:embed-certs-920571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:34:55.052254   79073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-920571"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:34:55.052337   79073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:34:55.061509   79073 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:34:55.061586   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:34:55.070182   79073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 19:34:55.086180   79073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:34:55.103184   79073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 19:34:55.119226   79073 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0829 19:34:55.122845   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:55.133782   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:55.266431   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:34:55.283043   79073 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571 for IP: 192.168.61.243
	I0829 19:34:55.283066   79073 certs.go:194] generating shared ca certs ...
	I0829 19:34:55.283081   79073 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:34:55.283237   79073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:34:55.283287   79073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:34:55.283297   79073 certs.go:256] generating profile certs ...
	I0829 19:34:55.283438   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/client.key
	I0829 19:34:55.283519   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key.dda9dcff
	I0829 19:34:55.283573   79073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key
	I0829 19:34:55.283708   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:34:55.283773   79073 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:34:55.283793   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:34:55.283831   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:34:55.283869   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:34:55.283901   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:34:55.283957   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:55.284835   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:34:55.330384   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:34:55.366718   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:34:55.393815   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:34:55.436855   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 19:34:55.463343   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:34:55.487693   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:34:55.511657   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:34:55.536017   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:34:55.558298   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:34:55.579840   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:34:55.601271   79073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:34:55.616634   79073 ssh_runner.go:195] Run: openssl version
	I0829 19:34:55.621890   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:34:55.633224   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637431   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637486   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.643034   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:34:55.654607   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:34:55.666297   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670433   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670492   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.675787   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:34:55.686953   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:34:55.697241   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701133   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701189   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.706242   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:34:55.716165   79073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:34:55.720159   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:34:55.727612   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:34:55.734806   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:34:55.742352   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:34:55.749483   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:34:55.756543   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:34:55.763413   79073 kubeadm.go:392] StartCluster: {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:34:55.763499   79073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:34:55.763537   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.803136   79073 cri.go:89] found id: ""
	I0829 19:34:55.803219   79073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:34:55.812851   79073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:34:55.812868   79073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:34:55.812907   79073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:34:55.823461   79073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:55.824969   79073 kubeconfig.go:125] found "embed-certs-920571" server: "https://192.168.61.243:8443"
	I0829 19:34:55.828095   79073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:34:55.838579   79073 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.243
	I0829 19:34:55.838616   79073 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:34:55.838626   79073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:34:55.838669   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.876618   79073 cri.go:89] found id: ""
	I0829 19:34:55.876674   79073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:34:55.893401   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:34:55.902557   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:34:55.902579   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:34:55.902631   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:34:55.911349   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:34:55.911407   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:34:55.920377   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:34:55.928764   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:34:55.928824   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:34:55.937630   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.945836   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:34:55.945897   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.954491   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:34:55.962466   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:34:55.962517   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:34:55.971080   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:34:55.979709   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:56.086301   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.378119   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.29178222s)
	I0829 19:34:57.378153   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.574026   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.655499   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.755371   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:34:57.755457   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.255939   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.755813   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.117916   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118404   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118427   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:59.118355   80540 retry.go:31] will retry after 2.806936823s: waiting for machine to come up
	I0829 19:35:01.927079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927473   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:01.927422   80540 retry.go:31] will retry after 3.008556566s: waiting for machine to come up
	I0829 19:34:59.255536   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.756296   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.802484   79073 api_server.go:72] duration metric: took 2.047112988s to wait for apiserver process to appear ...
	I0829 19:34:59.802516   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:34:59.802537   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:34:59.803088   79073 api_server.go:269] stopped: https://192.168.61.243:8443/healthz: Get "https://192.168.61.243:8443/healthz": dial tcp 192.168.61.243:8443: connect: connection refused
	I0829 19:35:00.302707   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.439793   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.439825   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.439837   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.482217   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.482245   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.802617   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.811079   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:02.811116   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.303128   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.307613   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:03.307657   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.803189   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.809164   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:35:03.816623   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:03.816649   79073 api_server.go:131] duration metric: took 4.014126212s to wait for apiserver health ...
	I0829 19:35:03.816657   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:35:03.816664   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:03.818484   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:03.819706   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:03.833365   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:35:03.851607   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:03.861274   79073 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:03.861313   79073 system_pods.go:61] "coredns-6f6b679f8f-2wrn6" [05e03841-faab-4fd4-88c9-199b39a71ba6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:03.861320   79073 system_pods.go:61] "etcd-embed-certs-920571" [5545a51a-3b76-4b39-b347-6f68b8d7edbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:03.861328   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [cecb3e4e-9d55-4dc9-8d14-884ffbf56475] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:03.861334   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [77e06ace-0262-418f-b41c-700aabf2fa1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:03.861338   79073 system_pods.go:61] "kube-proxy-hflpk" [a57a1785-8ccf-4955-b5b2-19c72032d9f5] Running
	I0829 19:35:03.861353   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [bdb2ed9c-3bf2-4e91-b6a4-ba947dab93ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:03.861359   79073 system_pods.go:61] "metrics-server-6867b74b74-xs5gp" [98380519-4a65-4208-b9cc-f1941a5c2f01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:03.861362   79073 system_pods.go:61] "storage-provisioner" [d18a769f-283f-4db3-aad0-82fc0267980f] Running
	I0829 19:35:03.861368   79073 system_pods.go:74] duration metric: took 9.738329ms to wait for pod list to return data ...
	I0829 19:35:03.861375   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:03.865311   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:03.865341   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:03.865355   79073 node_conditions.go:105] duration metric: took 3.974661ms to run NodePressure ...
	I0829 19:35:03.865373   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:04.939084   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939532   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939567   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:04.939479   80540 retry.go:31] will retry after 3.738266407s: waiting for machine to come up
	I0829 19:35:04.123411   79073 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127613   79073 kubeadm.go:739] kubelet initialised
	I0829 19:35:04.127639   79073 kubeadm.go:740] duration metric: took 4.197494ms waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127649   79073 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:04.132339   79073 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.136884   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136909   79073 pod_ready.go:82] duration metric: took 4.548897ms for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.136917   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136927   79073 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.141014   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141037   79073 pod_ready.go:82] duration metric: took 4.103179ms for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.141048   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141062   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.144778   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144799   79073 pod_ready.go:82] duration metric: took 3.728001ms for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.144807   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144812   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.255204   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255227   79073 pod_ready.go:82] duration metric: took 110.408053ms for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.255247   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255253   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656086   79073 pod_ready.go:93] pod "kube-proxy-hflpk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:04.656124   79073 pod_ready.go:82] duration metric: took 400.860776ms for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656137   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:06.674533   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	I0829 19:35:08.681559   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682042   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Found IP for machine: 192.168.50.70
	I0829 19:35:08.682070   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has current primary IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682080   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserving static IP address...
	I0829 19:35:08.682524   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserved static IP address: 192.168.50.70
	I0829 19:35:08.682564   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.682580   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for SSH to be available...
	I0829 19:35:08.682609   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | skip adding static IP to network mk-default-k8s-diff-port-672127 - found existing host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"}
	I0829 19:35:08.682623   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Getting to WaitForSSH function...
	I0829 19:35:08.684466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684816   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.684876   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684957   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH client type: external
	I0829 19:35:08.684982   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa (-rw-------)
	I0829 19:35:08.685032   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:08.685053   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | About to run SSH command:
	I0829 19:35:08.685069   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | exit 0
	I0829 19:35:08.806174   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:08.806493   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetConfigRaw
	I0829 19:35:08.807134   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:08.809574   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.809900   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.809924   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.810227   79559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:35:08.810457   79559 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:08.810478   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:08.810675   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.812964   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.813368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813620   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.813815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.813994   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.814161   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.814338   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.814533   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.814544   79559 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:08.914370   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:08.914415   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914742   79559 buildroot.go:166] provisioning hostname "default-k8s-diff-port-672127"
	I0829 19:35:08.914782   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914975   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.918471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.918829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.918857   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.919021   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.919186   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919373   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.919664   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.919865   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.919884   79559 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-672127 && echo "default-k8s-diff-port-672127" | sudo tee /etc/hostname
	I0829 19:35:09.032573   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-672127
	
	I0829 19:35:09.032606   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.035434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035811   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.035840   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035999   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.036182   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036465   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.036651   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.036833   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.036852   79559 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-672127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-672127/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-672127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:09.142908   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
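The SSH commands above are how the provisioner sets the guest's hostname and keeps /etc/hosts consistent with it. A condensed, hand-runnable sketch of the same steps (the NEW_HOSTNAME variable is illustrative; the log uses default-k8s-diff-port-672127):

    #!/usr/bin/env bash
    NEW_HOSTNAME="default-k8s-diff-port-672127"
    # Set the runtime hostname and persist it.
    sudo hostname "${NEW_HOSTNAME}" && echo "${NEW_HOSTNAME}" | sudo tee /etc/hostname
    # Point 127.0.1.1 at the new name unless an entry already exists.
    if ! grep -q "[[:space:]]${NEW_HOSTNAME}$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi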
	I0829 19:35:09.142937   79559 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:09.142978   79559 buildroot.go:174] setting up certificates
	I0829 19:35:09.142995   79559 provision.go:84] configureAuth start
	I0829 19:35:09.143010   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:09.143258   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.145947   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146313   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.146339   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146460   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.148631   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.148953   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.148978   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.149128   79559 provision.go:143] copyHostCerts
	I0829 19:35:09.149188   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:09.149204   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:09.149261   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:09.149368   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:09.149378   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:09.149400   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:09.149492   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:09.149501   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:09.149520   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:09.149578   79559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-672127 san=[127.0.0.1 192.168.50.70 default-k8s-diff-port-672127 localhost minikube]
	I0829 19:35:09.370220   79559 provision.go:177] copyRemoteCerts
	I0829 19:35:09.370277   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:09.370301   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.373233   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373723   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.373756   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373966   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.374180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.374342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.374496   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.457104   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:35:09.481139   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:09.504611   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 19:35:09.529597   79559 provision.go:87] duration metric: took 386.586301ms to configureAuth
	I0829 19:35:09.529628   79559 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:09.529887   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:09.529989   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.532809   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533309   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.533342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533509   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.533743   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.533965   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.534169   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.534372   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.534523   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.534545   79559 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:09.754724   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:09.754752   79559 machine.go:96] duration metric: took 944.279776ms to provisionDockerMachine
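provisionDockerMachine finishes by dropping a sysconfig fragment that marks the service CIDR (10.96.0.0/12) as an insecure registry for CRI-O and restarting the runtime. Run by hand, the step above amounts to roughly:

    # Allow the in-cluster service CIDR as an insecure registry for CRI-O,
    # then restart the runtime so the option takes effect.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio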
	I0829 19:35:09.754766   79559 start.go:293] postStartSetup for "default-k8s-diff-port-672127" (driver="kvm2")
	I0829 19:35:09.754781   79559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:09.754807   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.755236   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:09.755270   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.757713   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.758125   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758274   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.758466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.758682   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.758823   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.841022   79559 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:09.846051   79559 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:09.846081   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:09.846163   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:09.846254   79559 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:09.846379   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:09.857443   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:09.884662   79559 start.go:296] duration metric: took 129.87923ms for postStartSetup
	I0829 19:35:09.884715   79559 fix.go:56] duration metric: took 19.789853711s for fixHost
	I0829 19:35:09.884739   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.888011   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888562   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.888593   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888789   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.888976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889188   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889347   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.889533   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.889723   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.889736   79559 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:09.990749   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960109.967111721
	
	I0829 19:35:09.990772   79559 fix.go:216] guest clock: 1724960109.967111721
	I0829 19:35:09.990782   79559 fix.go:229] Guest: 2024-08-29 19:35:09.967111721 +0000 UTC Remote: 2024-08-29 19:35:09.884720437 +0000 UTC m=+231.415600706 (delta=82.391284ms)
	I0829 19:35:09.990835   79559 fix.go:200] guest clock delta is within tolerance: 82.391284ms
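The fix.go lines above compare the guest clock, read over SSH with date +%s.%N, against the host clock and only resync when the delta exceeds the tolerance. A rough manual reproduction of that check (the KEY variable simply points at the machine key already shown in the log):

    # Compare guest and host wall clocks; a sub-second delta (82ms above) is within tolerance.
    KEY=/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa
    GUEST=$(ssh -i "$KEY" docker@192.168.50.70 'date +%s.%N')
    HOST=$(date +%s.%N)
    echo "guest-host delta: $(echo "$GUEST - $HOST" | bc) seconds"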
	I0829 19:35:09.990846   79559 start.go:83] releasing machines lock for "default-k8s-diff-port-672127", held for 19.896020367s
	I0829 19:35:09.990891   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.991180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.994076   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.994459   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994613   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995121   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995318   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995407   79559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:09.995464   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.995531   79559 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:09.995569   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.998302   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998673   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998703   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998732   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998750   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998832   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.998976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.999026   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999109   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999162   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999249   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999404   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.999395   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:10.124503   79559 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:10.130734   79559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:10.275859   79559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:10.281662   79559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:10.281728   79559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:10.297464   79559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:10.297488   79559 start.go:495] detecting cgroup driver to use...
	I0829 19:35:10.297553   79559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:10.316686   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:10.332836   79559 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:10.332880   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:10.347021   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:10.364479   79559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:10.506136   79559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:10.659246   79559 docker.go:233] disabling docker service ...
	I0829 19:35:10.659324   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:10.678953   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:10.694844   79559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:10.837509   79559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:10.976512   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
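Because this profile runs on CRI-O, the cri-docker and docker units are stopped, disabled and masked so they cannot claim the CRI socket. Collapsed into a plain script, the sequence above is approximately:

    # Stop and mask cri-dockerd and Docker so CRI-O owns the container runtime.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is no longer active"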
	I0829 19:35:10.993421   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:11.013434   79559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:11.013492   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.023909   79559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:11.023980   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.038560   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.049911   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.060235   79559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:11.076772   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.093357   79559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.110140   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.121770   79559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:11.131641   79559 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:11.131697   79559 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:11.151460   79559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:11.161320   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:11.286180   79559 ssh_runner.go:195] Run: sudo systemctl restart crio
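The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, select the cgroupfs cgroup manager, run conmon in the pod cgroup and allow unprivileged binds to low ports; br_netfilter and IP forwarding are then enabled and CRI-O is restarted. A condensed, approximate equivalent (CONF is a shorthand variable, not from the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and the cgroup driver the kubelet expects.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # Run conmon in the pod cgroup and open unprivileged low ports via default_sysctls.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Make sure bridged traffic hits iptables, then restart the runtime.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio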
	I0829 19:35:11.382235   79559 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:11.382312   79559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:11.388226   79559 start.go:563] Will wait 60s for crictl version
	I0829 19:35:11.388299   79559 ssh_runner.go:195] Run: which crictl
	I0829 19:35:11.391832   79559 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:11.429509   79559 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:11.429601   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.457180   79559 ssh_runner.go:195] Run: crio --version
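Once crio.sock reappears, the runtime is verified through both the CRI API and the CRI-O binary before Kubernetes is prepared. The same checks by hand:

    stat /var/run/crio/crio.sock      # the CRI socket exists once CRI-O is back up
    sudo /usr/bin/crictl version      # CRI API handshake (RuntimeApiVersion v1 above)
    crio --version                    # CRI-O binary version (1.29.1 in this run)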
	I0829 19:35:11.487106   79559 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:11.488483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:11.491607   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.491988   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:11.492027   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.492316   79559 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:11.496448   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:11.512045   79559 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:11.512159   79559 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:11.512219   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:11.549212   79559 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:11.549287   79559 ssh_runner.go:195] Run: which lz4
	I0829 19:35:11.554151   79559 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:11.558691   79559 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:11.558718   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:35:12.826290   79559 crio.go:462] duration metric: took 1.272173781s to copy over tarball
	I0829 19:35:12.826387   79559 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
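Since no kube-apiserver image was found in the CRI-O store, the cached preload tarball is pushed into the guest and unpacked over /var. A rough manual equivalent (the log copies straight to /preloaded.tar.lz4 through its own scp path; this sketch stages the file under /tmp first so it works as the docker user):

    KEY=/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa
    PRELOAD=/home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
    # Copy the ~389 MB tarball into the guest, then unpack it over /var,
    # keeping xattrs so image layers retain their file capabilities.
    scp -i "$KEY" "$PRELOAD" docker@192.168.50.70:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.50.70 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm -f /tmp/preloaded.tar.lz4'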
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
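Interleaved with this, the old-k8s-version-467349 VM is booting and libmachine polls for its DHCP lease with randomized, generally growing retry intervals (253ms, 375ms, 329ms, ...). A shell approximation of that wait loop, assuming virsh access to the same libvirt network (the DELAY growth factor is illustrative, not minikube's exact backoff):

    MAC="52:54:00:1e:26:7c"
    NET="mk-old-k8s-version-467349"
    DELAY=0.25
    # Poll libvirt's DHCP leases until the domain's MAC picks up an address.
    until virsh net-dhcp-leases "$NET" | grep -q "$MAC"; do
      echo "no lease yet for $MAC; retrying in ${DELAY}s"
      sleep "$DELAY"
      DELAY=$(echo "$DELAY * 1.5" | bc)   # grow the wait, like the retry.go backoff above
    done
    virsh net-dhcp-leases "$NET" | grep "$MAC"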
	I0829 19:35:09.162551   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:11.165654   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:13.662027   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:15.034221   79559 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.207800368s)
	I0829 19:35:15.034254   79559 crio.go:469] duration metric: took 2.207935139s to extract the tarball
	I0829 19:35:15.034263   79559 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:15.070411   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:15.117649   79559 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:35:15.117675   79559 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:35:15.117684   79559 kubeadm.go:934] updating node { 192.168.50.70 8444 v1.31.0 crio true true} ...
	I0829 19:35:15.117793   79559 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-672127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:15.117873   79559 ssh_runner.go:195] Run: crio config
	I0829 19:35:15.161749   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:15.161778   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:15.161795   79559 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:15.161815   79559 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-672127 NodeName:default-k8s-diff-port-672127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:35:15.161949   79559 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-672127"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:15.162002   79559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:35:15.171789   79559 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:15.171858   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:15.181011   79559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0829 19:35:15.197394   79559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:15.213309   79559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:35:15.231088   79559 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:15.234732   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
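host.minikube.internal (earlier) and control-plane.minikube.internal (here) are pinned in the guest's /etc/hosts with the same filter-and-append one-liner: drop any stale line for the name, append the current mapping, and copy the temp file back into place via sudo. Spelled out for the control-plane entry:

    # Rewrite /etc/hosts: strip the old control-plane entry, add the current one.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.50.70\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$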
	I0829 19:35:15.245700   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:15.368430   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:15.385792   79559 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127 for IP: 192.168.50.70
	I0829 19:35:15.385820   79559 certs.go:194] generating shared ca certs ...
	I0829 19:35:15.385844   79559 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:15.386020   79559 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:15.386108   79559 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:15.386123   79559 certs.go:256] generating profile certs ...
	I0829 19:35:15.386240   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/client.key
	I0829 19:35:15.386324   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key.828c23de
	I0829 19:35:15.386378   79559 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key
	I0829 19:35:15.386523   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:15.386567   79559 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:15.386582   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:15.386615   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:15.386650   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:15.386680   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:15.386736   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:15.387663   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:15.429474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:15.470861   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:15.514906   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:15.552474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:35:15.581749   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:15.605874   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:15.629703   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:35:15.653589   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:15.680222   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:15.706824   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:15.733354   79559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:15.753069   79559 ssh_runner.go:195] Run: openssl version
	I0829 19:35:15.759905   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:15.770507   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776103   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776159   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.783674   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:15.797519   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:15.809517   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814243   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814311   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.819834   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:15.830130   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:15.840473   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.844974   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.845033   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.850619   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:15.860955   79559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:15.865359   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:15.871149   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:15.876982   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:15.882635   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:15.888020   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:15.893423   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
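The hash symlinks and -checkend probes above are the standard OpenSSL idioms for installing a trusted CA and confirming a certificate is not about to expire; they can be reproduced directly on the node (paths taken from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # Link the CA into /etc/ssl/certs under its subject-hash name (b5213941 above).
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    # Exit non-zero if the cert expires within the next 24 hours (86400 s).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "certificate is valid for at least another day"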
	I0829 19:35:15.898989   79559 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:15.899085   79559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:15.899156   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:15.939743   79559 cri.go:89] found id: ""
	I0829 19:35:15.939817   79559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:15.949877   79559 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:15.949896   79559 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:15.949938   79559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:15.959436   79559 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:15.960417   79559 kubeconfig.go:125] found "default-k8s-diff-port-672127" server: "https://192.168.50.70:8444"
	I0829 19:35:15.962469   79559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:15.971672   79559 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0829 19:35:15.971700   79559 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:15.971710   79559 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:15.971777   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:16.015084   79559 cri.go:89] found id: ""
	I0829 19:35:16.015173   79559 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:16.031614   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:16.044359   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:16.044384   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:16.044448   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:35:16.056073   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:16.056139   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:16.066426   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:35:16.075300   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:16.075368   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:16.084795   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.093739   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:16.093804   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.103539   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:35:16.112676   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:16.112744   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
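The sequence above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and removes any file that is missing it, so that the following `kubeadm init phase kubeconfig all` regenerates a consistent set. A sketch of that check-and-remove pattern, assuming direct local file access instead of minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs deletes any kubeconfig that does not mention the
// expected control-plane endpoint so `kubeadm init phase kubeconfig all`
// can recreate it. Paths mirror the log; the function itself is illustrative.
func cleanStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // file exists and points at the right endpoint
		}
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			return rmErr
		}
	}
	return nil
}

func main() {
	fmt.Println(cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444"))
}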
	I0829 19:35:16.121997   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:16.134461   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:16.246853   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.577230   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.330337638s)
	I0829 19:35:17.577271   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.810593   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.892546   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.993500   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:17.993595   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:18.494169   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
	I0829 19:35:15.662198   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:16.162890   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:16.162919   79073 pod_ready.go:82] duration metric: took 11.506772145s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:16.162931   79073 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.170586   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:18.994574   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.493764   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.509384   79559 api_server.go:72] duration metric: took 1.515882118s to wait for apiserver process to appear ...
	I0829 19:35:19.509415   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:35:19.509440   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.555577   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.555625   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:21.555642   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.572445   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.572481   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:22.009612   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.017592   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.017627   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:22.510148   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.516104   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.516140   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:23.009648   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:23.016342   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:35:23.022852   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:23.022878   79559 api_server.go:131] duration metric: took 3.513455745s to wait for apiserver health ...
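The healthz probes above follow the usual bootstrap progression: 403 for the anonymous probe until the bootstrap RBAC roles that permit access to /healthz exist, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, then 200. A hedged sketch of such a polling loop; the endpoint, interval, and TLS handling here are assumptions, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bootstrap; a real
		// client would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.70:8444/healthz", 4*time.Minute))
}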
	I0829 19:35:23.022889   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:23.022897   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:23.024557   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:23.025764   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:23.035743   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
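The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A bridge CNI configuration of that kind typically looks like the following; the subnet and plugin options here are illustrative assumptions, not the file minikube actually wrote:

package main

import "os"

// bridgeConflist is a hedged example of a minimal bridge CNI configuration.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Writing under /etc/cni/net.d requires root, as in the log's sudo mkdir/scp.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}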
	I0829 19:35:23.075272   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:23.091948   79559 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:23.091991   79559 system_pods.go:61] "coredns-6f6b679f8f-p92hj" [736e7c46-b945-445f-a404-20a609f766e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:23.092004   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [cf016602-46cd-4972-bdd3-1ef5d881b6e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:23.092014   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [eb51ac87-f5e4-4031-84fe-811da2ff8d63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:23.092026   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [caf7b777-935f-4351-b58d-60bb8175bec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:23.092034   79559 system_pods.go:61] "kube-proxy-tlc89" [9a11e5a6-b624-494b-8e94-d362b94fb98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:35:23.092043   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fe83e2af-b046-4d56-9b5c-d7a17db7e854] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:23.092053   79559 system_pods.go:61] "metrics-server-6867b74b74-tbkxg" [6d8f8c92-4f89-4a2a-8690-51a850768516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:23.092065   79559 system_pods.go:61] "storage-provisioner" [7349bb79-c402-4587-ab0b-e52e5d455c61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:35:23.092078   79559 system_pods.go:74] duration metric: took 16.779413ms to wait for pod list to return data ...
	I0829 19:35:23.092091   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:23.099492   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:23.099533   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:23.099547   79559 node_conditions.go:105] duration metric: took 7.450351ms to run NodePressure ...
	I0829 19:35:23.099571   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:23.371279   79559 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377322   79559 kubeadm.go:739] kubelet initialised
	I0829 19:35:23.377346   79559 kubeadm.go:740] duration metric: took 6.045074ms waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377353   79559 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:23.384232   79559 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.391931   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391960   79559 pod_ready.go:82] duration metric: took 7.702072ms for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.391971   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391980   79559 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.396708   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396728   79559 pod_ready.go:82] duration metric: took 4.739691ms for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.396736   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396744   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.401274   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401298   79559 pod_ready.go:82] duration metric: took 4.546455ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.401308   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401314   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
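The WaitExtra loop above skips pods whose node is not yet "Ready" and otherwise waits for each system-critical pod's Ready condition. A sketch of the per-pod check using client-go; the kubeconfig path and pod name are taken from the log only as examples:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True; the wait loop
// in the log additionally skips pods whose hosting node is not yet Ready.
func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podIsReady(context.Background(), cs, "kube-system", "coredns-6f6b679f8f-p92hj"))
}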
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:20.670356   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:22.670699   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.786910   78865 start.go:364] duration metric: took 49.670356886s to acquireMachinesLock for "no-preload-690795"
	I0829 19:35:27.786963   78865 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:27.786975   78865 fix.go:54] fixHost starting: 
	I0829 19:35:27.787377   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:27.787425   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:27.803558   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0829 19:35:27.803903   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:27.804328   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:35:27.804348   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:27.804623   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:27.804824   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:27.804967   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:35:27.806332   78865 fix.go:112] recreateIfNeeded on no-preload-690795: state=Stopped err=<nil>
	I0829 19:35:27.806353   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	W0829 19:35:27.806525   78865 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:27.808678   78865 out.go:177] * Restarting existing kvm2 VM for "no-preload-690795" ...
	I0829 19:35:25.407622   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.910410   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
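The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the restarted VM only if the delta is within tolerance. A small sketch of that comparison; the 2-second tolerance is an assumption, not minikube's configured value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaWithinTolerance parses the guest's `date +%s.%N` output and
// reports the offset from the local (host) clock.
func clockDeltaWithinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	delta, ok, err := clockDeltaWithinTolerance("1724960127.745017542", 2*time.Second)
	fmt.Println(delta, ok, err)
}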
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
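
The two commands above neutralize any pre-existing bridge/podman CNI configs so that the CNI minikube installs later is the only one CRI-O loads. A hedged sketch of the same rename-and-restore pattern (the .mk_disabled suffix matches the log; the restore loop is added here only for illustration):

    # Disable conflicting CNI configs by renaming them, as in the find/mv logged above.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'echo "disabling $1"; mv "$1" "$1.mk_disabled"' _ {} \;

    # To undo later, strip the suffix again.
    for f in /etc/cni/net.d/*.mk_disabled; do [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"; done
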
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:25.170397   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.669465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
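
Taken together, the steps above point crictl at the CRI-O socket, pin the pause image, switch CRI-O to the cgroupfs cgroup manager, load br_netfilter, and enable IP forwarding before restarting the runtime. A condensed sketch of that sequence, using only the paths and values visible in the log (run on the guest):

    # Point crictl at CRI-O's socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and the cgroup manager in the CRI-O drop-in config.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf

    # Make bridged traffic visible to iptables, enable forwarding, then restart the runtime.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio
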
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:27.810111   78865 main.go:141] libmachine: (no-preload-690795) Calling .Start
	I0829 19:35:27.810300   78865 main.go:141] libmachine: (no-preload-690795) Ensuring networks are active...
	I0829 19:35:27.811063   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network default is active
	I0829 19:35:27.811464   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network mk-no-preload-690795 is active
	I0829 19:35:27.811848   78865 main.go:141] libmachine: (no-preload-690795) Getting domain xml...
	I0829 19:35:27.812590   78865 main.go:141] libmachine: (no-preload-690795) Creating domain...
	I0829 19:35:29.131821   78865 main.go:141] libmachine: (no-preload-690795) Waiting to get IP...
	I0829 19:35:29.132876   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.133519   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.133595   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.133481   80876 retry.go:31] will retry after 252.123266ms: waiting for machine to come up
	I0829 19:35:29.387046   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.387534   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.387561   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.387496   80876 retry.go:31] will retry after 304.157394ms: waiting for machine to come up
	I0829 19:35:29.693891   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.694581   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.694603   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.694560   80876 retry.go:31] will retry after 366.980614ms: waiting for machine to come up
	I0829 19:35:30.063032   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.063466   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.063504   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.063431   80876 retry.go:31] will retry after 562.46082ms: waiting for machine to come up
	I0829 19:35:30.412868   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.908366   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.408823   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.408848   79559 pod_ready.go:82] duration metric: took 10.007525744s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.408862   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418176   79559 pod_ready.go:93] pod "kube-proxy-tlc89" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.418202   79559 pod_ready.go:82] duration metric: took 9.33136ms for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418214   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424362   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.424388   79559 pod_ready.go:82] duration metric: took 6.165646ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424401   79559 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
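
The pod_ready lines interleaved through this section are minikube polling each kube-system pod for the Ready condition (the metrics-server pods never reach it in this run). An equivalent one-off check with kubectl, assuming the context name from the log and the standard k8s-app=metrics-server label:

    # Wait up to 4 minutes for metrics-server to report Ready.
    kubectl --context default-k8s-diff-port-672127 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m

    # Or inspect the Ready condition directly on the pod named in the log.
    kubectl --context default-k8s-diff-port-672127 -n kube-system \
      get pod metrics-server-6867b74b74-tbkxg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
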
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
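
Because no preloaded images were found on the guest, minikube copies its preload tarball in and unpacks it straight into /var. A sketch of the same transfer-and-extract step using plain scp (key and tarball paths are the ones in the log; staging under /tmp instead of / is an adjustment made here to avoid permission issues):

    key=/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa
    tarball=/home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # Stage the tarball in the guest, then unpack it under /var with xattrs preserved.
    scp -i "$key" "$tarball" docker@192.168.72.112:/tmp/preloaded.tar.lz4
    ssh -i "$key" docker@192.168.72.112 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4'
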
	I0829 19:35:29.670803   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.171154   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:30.627531   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.628123   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.628147   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.628030   80876 retry.go:31] will retry after 488.97189ms: waiting for machine to come up
	I0829 19:35:31.118901   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.119457   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.119480   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.119398   80876 retry.go:31] will retry after 801.189699ms: waiting for machine to come up
	I0829 19:35:31.921939   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.922447   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.922482   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.922391   80876 retry.go:31] will retry after 828.788864ms: waiting for machine to come up
	I0829 19:35:32.752986   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:32.753429   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:32.753465   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:32.753385   80876 retry.go:31] will retry after 1.404436811s: waiting for machine to come up
	I0829 19:35:34.159129   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:34.159714   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:34.159741   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:34.159678   80876 retry.go:31] will retry after 1.312099391s: waiting for machine to come up
	I0829 19:35:35.473045   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:35.473510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:35.473549   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:35.473461   80876 retry.go:31] will retry after 1.46129368s: waiting for machine to come up
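
The no-preload-690795 lines above show libmachine polling the libvirt network's DHCP leases with a growing backoff until the freshly created domain picks up an address. A rough shell equivalent, assuming virsh is available on the host and that the lease table columns are in their usual positions (MAC in column 3, IP/prefix in column 5); the MAC and network name are the ones in the log:

    # Poll the libvirt network for a DHCP lease matching the domain's MAC, with a capped backoff.
    mac=52:54:00:2b:48:ed
    net=mk-no-preload-690795
    delay=1
    for attempt in $(seq 1 20); do
      ip=$(virsh net-dhcp-leases "$net" | awk -v m="$mac" '$3 == m { split($5, a, "/"); print a[1] }')
      if [ -n "$ip" ]; then echo "machine is up at $ip"; break; fi
      echo "attempt $attempt: no lease for $mac yet, retrying in ${delay}s"
      sleep "$delay"
      delay=$((delay * 2)); [ "$delay" -gt 30 ] && delay=30
    done
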
	I0829 19:35:35.431524   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:37.437993   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
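
LoadCachedImages fails here because the per-image cache under .minikube/cache/images was never populated for this profile, so the warning is expected and minikube carries on, leaving the runtime to pull whatever is missing. A quick check of what is (and is not) cached, using the paths from the surrounding log lines:

    # List whichever v1.20.0 control-plane images are actually present in the local cache.
    cache=/home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io
    for img in pause_3.2 coredns_1.7.0 etcd_3.4.13-0 kube-apiserver_v1.20.0 \
               kube-controller-manager_v1.20.0 kube-proxy_v1.20.0 kube-scheduler_v1.20.0; do
      if [ -e "$cache/$img" ]; then echo "cached:  $img"; else echo "missing: $img"; fi
    done
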
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
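
The unit snippet above is a systemd drop-in: the empty ExecStart= line clears the packaged unit's command before the full kubelet invocation replaces it. Judging by the scp destinations a few lines below, the file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; applying such a drop-in by hand would look roughly like this (content copied from the log):

    # Install the kubelet override and reload systemd so the new ExecStart takes effect.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
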
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
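
Two cert-housekeeping idioms show up above: each CA is symlinked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0-style names), and each serving cert is tested with -checkend 86400 to confirm it will not expire within the next 24 hours. Doing the same by hand for the certs named in the log:

    # Link the minikube CA into the system trust dir under its subject-hash name (what the .0 files above are).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

    # Exit non-zero if the apiserver client cert would expire within the next 86400s (24h).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "cert valid for at least another day" \
      || echo "cert expires within 24h"
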
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
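
For a restart, minikube does not run a full kubeadm init; it replays the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config, which leaves existing cluster state in place. The same sequence, run by hand on the guest with the paths from the log, would be roughly:

    # Re-run the kubeadm init phases used for a cluster restart, in the order logged above.
    cfg=/var/tmp/minikube/kubeadm.yaml
    bins=/var/lib/minikube/binaries/v1.20.0
    sudo env PATH="$bins:$PATH" kubeadm init phase certs all         --config "$cfg"
    sudo env PATH="$bins:$PATH" kubeadm init phase kubeconfig all    --config "$cfg"
    sudo env PATH="$bins:$PATH" kubeadm init phase kubelet-start     --config "$cfg"
    sudo env PATH="$bins:$PATH" kubeadm init phase control-plane all --config "$cfg"
    sudo env PATH="$bins:$PATH" kubeadm init phase etcd local        --config "$cfg"
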
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
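
The repeated pgrep lines that follow are a simple poll for the kube-apiserver process, retried roughly every half second until it appears or the wait times out. A minimal version of the same loop (the 60s deadline here is an assumption, not a value taken from the log):

    # Poll for a running kube-apiserver started by minikube, giving up after ~60s.
    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then echo "timed out waiting for kube-apiserver" >&2; exit 1; fi
      sleep 0.5
    done
    echo "kube-apiserver is running (pid $(sudo pgrep -xnf 'kube-apiserver.*minikube.*'))"
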
	I0829 19:35:34.172082   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.669124   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.669696   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.936034   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:36.936510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:36.936539   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:36.936463   80876 retry.go:31] will retry after 1.943807762s: waiting for machine to come up
	I0829 19:35:38.881644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:38.882110   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:38.882133   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:38.882067   80876 retry.go:31] will retry after 3.173912619s: waiting for machine to come up
	I0829 19:35:39.932725   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.429439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
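The repeated Run lines above are minikube polling for the kube-apiserver process roughly every half second after the kubelet-start phase. A minimal shell sketch of an equivalent wait loop; the 120 s deadline is an assumption for illustration and is not stated in this part of the log:

#!/bin/bash
# Wait for kube-apiserver the same way the log does: pgrep against the full
# command line about every 0.5s until it appears or a deadline passes.
deadline=$((SECONDS + 120))   # illustrative timeout, not taken from the log
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
  if (( SECONDS >= deadline )); then
    echo "timed out waiting for kube-apiserver" >&2
    exit 1
  fi
  sleep 0.5
done
echo "kube-apiserver is up (pid $(sudo pgrep -xnf 'kube-apiserver.*minikube.*'))"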
	I0829 19:35:41.168816   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.669594   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.059140   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:42.059668   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:42.059692   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:42.059602   80876 retry.go:31] will retry after 4.193427915s: waiting for machine to come up
	I0829 19:35:44.430473   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.431149   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.671012   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.168836   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.256270   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.256783   78865 main.go:141] libmachine: (no-preload-690795) Found IP for machine: 192.168.39.76
	I0829 19:35:46.256806   78865 main.go:141] libmachine: (no-preload-690795) Reserving static IP address...
	I0829 19:35:46.256822   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has current primary IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.257249   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.257274   78865 main.go:141] libmachine: (no-preload-690795) Reserved static IP address: 192.168.39.76
	I0829 19:35:46.257289   78865 main.go:141] libmachine: (no-preload-690795) DBG | skip adding static IP to network mk-no-preload-690795 - found existing host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"}
	I0829 19:35:46.257299   78865 main.go:141] libmachine: (no-preload-690795) Waiting for SSH to be available...
	I0829 19:35:46.257313   78865 main.go:141] libmachine: (no-preload-690795) DBG | Getting to WaitForSSH function...
	I0829 19:35:46.259334   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259664   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.259692   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259788   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH client type: external
	I0829 19:35:46.259821   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa (-rw-------)
	I0829 19:35:46.259859   78865 main.go:141] libmachine: (no-preload-690795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:46.259871   78865 main.go:141] libmachine: (no-preload-690795) DBG | About to run SSH command:
	I0829 19:35:46.259902   78865 main.go:141] libmachine: (no-preload-690795) DBG | exit 0
	I0829 19:35:46.389869   78865 main.go:141] libmachine: (no-preload-690795) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:46.390295   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetConfigRaw
	I0829 19:35:46.390987   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.393890   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394310   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.394342   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394673   78865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:35:46.394846   78865 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:46.394869   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:46.395082   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.397203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397508   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.397535   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397676   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.397862   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398011   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398178   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.398314   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.398475   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.398486   78865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:46.502132   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:46.502163   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502426   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:35:46.502449   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.505084   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505414   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.505443   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505665   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.505861   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506035   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506219   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.506379   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.506573   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.506597   78865 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-690795 && echo "no-preload-690795" | sudo tee /etc/hostname
	I0829 19:35:46.627246   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-690795
	
	I0829 19:35:46.627269   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.630081   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630430   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.630454   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630611   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.630780   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.630947   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.631233   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.631397   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.631545   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.631568   78865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-690795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-690795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-690795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:46.746055   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:46.746106   78865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:46.746131   78865 buildroot.go:174] setting up certificates
	I0829 19:35:46.746143   78865 provision.go:84] configureAuth start
	I0829 19:35:46.746160   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.746411   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.749125   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749476   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.749497   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.751828   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752178   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.752203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752317   78865 provision.go:143] copyHostCerts
	I0829 19:35:46.752384   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:46.752404   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:46.752475   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:46.752580   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:46.752591   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:46.752619   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:46.752693   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:46.752703   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:46.752728   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:46.752791   78865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.no-preload-690795 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-690795]
	I0829 19:35:46.901689   78865 provision.go:177] copyRemoteCerts
	I0829 19:35:46.901744   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:46.901764   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.904873   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905241   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.905287   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905458   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.905657   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.905805   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.905960   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:46.988181   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:47.011149   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:35:47.034849   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:47.057375   78865 provision.go:87] duration metric: took 311.217634ms to configureAuth
	I0829 19:35:47.057402   78865 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:47.057599   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:47.057695   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.060274   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060594   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.060620   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060750   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.060976   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061149   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061311   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.061465   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.061676   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.061703   78865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:47.284836   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:47.284862   78865 machine.go:96] duration metric: took 890.004565ms to provisionDockerMachine
	I0829 19:35:47.284876   78865 start.go:293] postStartSetup for "no-preload-690795" (driver="kvm2")
	I0829 19:35:47.284889   78865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:47.284909   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.285207   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:47.285232   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.287875   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288162   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.288180   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288391   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.288597   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.288772   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.288899   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.372833   78865 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:47.376649   78865 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:47.376670   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:47.376729   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:47.376801   78865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:47.376881   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:47.385721   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:47.407601   78865 start.go:296] duration metric: took 122.711153ms for postStartSetup
	I0829 19:35:47.407640   78865 fix.go:56] duration metric: took 19.620666095s for fixHost
	I0829 19:35:47.407673   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.410483   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.410873   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.410903   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.411139   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.411363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411527   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411674   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.411830   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.411987   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.412001   78865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:47.518841   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960147.499237123
	
	I0829 19:35:47.518864   78865 fix.go:216] guest clock: 1724960147.499237123
	I0829 19:35:47.518872   78865 fix.go:229] Guest: 2024-08-29 19:35:47.499237123 +0000 UTC Remote: 2024-08-29 19:35:47.407643858 +0000 UTC m=+351.882891548 (delta=91.593265ms)
	I0829 19:35:47.518891   78865 fix.go:200] guest clock delta is within tolerance: 91.593265ms
	I0829 19:35:47.518896   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 19.731957743s
	I0829 19:35:47.518914   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.519214   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:47.521738   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522125   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.522153   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522310   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.522806   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523016   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523082   78865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:47.523127   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.523209   78865 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:47.523225   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.526076   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526443   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.526462   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526489   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526681   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.526826   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527005   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527036   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.527073   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.527199   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.527197   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.527370   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527537   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527690   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.635450   78865 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:47.641274   78865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:47.788805   78865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:47.794545   78865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:47.794601   78865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:47.810156   78865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:47.810175   78865 start.go:495] detecting cgroup driver to use...
	I0829 19:35:47.810228   78865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:47.825795   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:47.839011   78865 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:47.839061   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:47.851854   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:47.864467   78865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:47.999155   78865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:48.143858   78865 docker.go:233] disabling docker service ...
	I0829 19:35:48.143921   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:48.157740   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:48.172067   78865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:48.339557   78865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:48.462950   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:48.475646   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:48.492262   78865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:48.492329   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.501580   78865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:48.501647   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.511241   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.520477   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.530413   78865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:48.540457   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.551258   78865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.567365   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.577266   78865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:48.586423   78865 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:48.586479   78865 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:48.599527   78865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:48.608666   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:48.721808   78865 ssh_runner.go:195] Run: sudo systemctl restart crio
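The sed commands above adjust /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the 3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch of the end state those edits aim for, written here as a fresh drop-in for readability; the log edits the existing file in place, and the [crio.image]/[crio.runtime] section placement below is the standard CRI-O layout rather than something shown in the log:

#!/bin/bash
# Equivalent end state of the sed edits in the log, expressed as one drop-in file.
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl daemon-reload
sudo systemctl restart crio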
	I0829 19:35:48.811417   78865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:48.811495   78865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:48.816689   78865 start.go:563] Will wait 60s for crictl version
	I0829 19:35:48.816750   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:48.820563   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:48.862786   78865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:48.862869   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.889834   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.918515   78865 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:48.919643   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:48.922182   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922530   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:48.922560   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922725   78865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:48.926877   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:48.939254   78865 kubeadm.go:883] updating cluster {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:48.939379   78865 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:48.939413   78865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:48.972281   78865 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:48.972304   78865 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:48.972345   78865 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.972361   78865 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.972384   78865 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.972425   78865 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.972443   78865 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:48.972452   78865 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 19:35:48.972496   78865 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.972558   78865 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973929   78865 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.973979   78865 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 19:35:48.973933   78865 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.973931   78865 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.973932   78865 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973939   78865 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.229315   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 19:35:49.232334   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.271261   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.328903   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.339435   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.349057   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.356840   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.387705   78865 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 19:35:49.387748   78865 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 19:35:49.387760   78865 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.387777   78865 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.387808   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.387829   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.389731   78865 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 19:35:49.389769   78865 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.389809   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.438231   78865 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 19:35:49.438264   78865 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.438304   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.453177   78865 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 19:35:49.453220   78865 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.453270   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.455713   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.455767   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.455802   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.455804   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.455772   78865 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 19:35:49.455895   78865 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.455921   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.458141   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.539090   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.539125   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.568605   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.573622   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.678619   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.680581   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.680584   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.680671   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.699638   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.706556   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.803909   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.809759   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 19:35:49.809863   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.810356   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 19:35:49.810423   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:49.811234   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 19:35:49.811285   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:49.832040   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 19:35:49.832102   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 19:35:49.832153   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:49.832162   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:49.862517   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 19:35:49.862537   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862578   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862653   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 19:35:49.862696   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 19:35:49.862703   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 19:35:49.862731   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 19:35:49.862760   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 19:35:49.862788   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:35:50.192890   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
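The cache_images lines above show the no-preload flow: the required images are missing from the runtime, so each one is taken from the host-side cache, copied to the VM only if the tarball is not already under /var/lib/minikube/images, and imported with podman load so CRI-O can see it. A minimal sketch for a single image; the real flow copies the tarball over SSH, and the source path under ~/.minikube/cache reflects the host layout visible in the log:

#!/bin/bash
# Load one cached image tarball into the VM's container storage, as in the log.
src="$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0"
dst=/var/lib/minikube/images/kube-proxy_v1.31.0

# Skip the copy when the tarball already exists (the log's "copy: skipping ... (exists)").
if ! stat -c "%s %y" "$dst" >/dev/null 2>&1; then
  sudo cp "$src" "$dst"       # stand-in for the scp step the real flow performs
fi
sudo podman load -i "$dst"    # import into the container storage shared with CRI-O
sudo crictl images | grep kube-proxy   # confirm the runtime now sees the image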
	I0829 19:35:48.930928   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:50.931805   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.430716   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.168967   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:52.169327   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:51.820978   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.958376621s)
	I0829 19:35:51.821014   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 19:35:51.821035   78865 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821077   78865 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.958265625s)
	I0829 19:35:51.821109   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821108   78865 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.62819044s)
	I0829 19:35:51.821211   78865 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 19:35:51.821243   78865 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:51.821275   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:51.821111   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 19:35:55.931182   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.431477   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.669752   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:56.670764   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:55.594240   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.773093303s)
	I0829 19:35:55.594275   78865 ssh_runner.go:235] Completed: which crictl: (3.77298113s)
	I0829 19:35:55.594290   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 19:35:55.594340   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:55.594348   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:55.594403   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:57.972145   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377784997s)
	I0829 19:35:57.972180   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.377757134s)
	I0829 19:35:57.972210   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 19:35:57.972223   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:57.972237   78865 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:57.972270   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:58.025853   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:59.843856   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.871560481s)
	I0829 19:35:59.843883   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818003416s)
	I0829 19:35:59.843887   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 19:35:59.843915   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 19:35:59.843925   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.844004   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:35:59.844019   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.849625   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 19:36:00.432638   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.078312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.170365   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.668465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.670347   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.294196   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.450154791s)
	I0829 19:36:01.294230   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 19:36:01.294273   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:01.294336   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:03.144937   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.850574318s)
	I0829 19:36:03.144978   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 19:36:03.145018   78865 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.145081   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.803763   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 19:36:03.803802   78865 cache_images.go:123] Successfully loaded all cached images
	I0829 19:36:03.803807   78865 cache_images.go:92] duration metric: took 14.831492974s to LoadCachedImages
	I0829 19:36:03.803818   78865 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.31.0 crio true true} ...
	I0829 19:36:03.803927   78865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:36:03.803988   78865 ssh_runner.go:195] Run: crio config
	I0829 19:36:03.854859   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:03.854879   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:03.854894   78865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:36:03.854915   78865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-690795 NodeName:no-preload-690795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:36:03.855055   78865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-690795"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:36:03.855114   78865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:36:03.865163   78865 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:36:03.865236   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:36:03.874348   78865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:36:03.891540   78865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:36:03.908488   78865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 19:36:03.926440   78865 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0829 19:36:03.930270   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:36:03.942353   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:36:04.066646   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:36:04.083872   78865 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795 for IP: 192.168.39.76
	I0829 19:36:04.083901   78865 certs.go:194] generating shared ca certs ...
	I0829 19:36:04.083921   78865 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:36:04.084106   78865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:36:04.084172   78865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:36:04.084186   78865 certs.go:256] generating profile certs ...
	I0829 19:36:04.084307   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/client.key
	I0829 19:36:04.084432   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key.8a2db174
	I0829 19:36:04.084492   78865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key
	I0829 19:36:04.084656   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:36:04.084705   78865 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:36:04.084718   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:36:04.084753   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:36:04.084790   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:36:04.084827   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:36:04.084883   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:36:04.085744   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:36:04.124689   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:36:04.158769   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:36:04.188748   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:36:04.217577   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:36:04.251166   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:36:04.282961   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:36:04.306431   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:36:04.329260   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:36:04.365050   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:36:04.393054   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:36:04.417384   78865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:36:04.434555   78865 ssh_runner.go:195] Run: openssl version
	I0829 19:36:04.440074   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:36:04.451378   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455603   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455655   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.461114   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:36:04.472522   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:36:04.483064   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487316   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487383   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.492860   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:36:04.504284   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:36:04.515522   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519853   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519908   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.525240   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:36:04.536612   78865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:36:04.540905   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:36:04.546622   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:36:04.552303   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:36:04.558306   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:36:04.564129   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:36:04.569635   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:36:04.575196   78865 kubeadm.go:392] StartCluster: {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:36:04.575279   78865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:36:04.575360   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.619563   78865 cri.go:89] found id: ""
	I0829 19:36:04.619638   78865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:36:04.629655   78865 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:36:04.629675   78865 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:36:04.629785   78865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:36:04.638771   78865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:36:04.639763   78865 kubeconfig.go:125] found "no-preload-690795" server: "https://192.168.39.76:8443"
	I0829 19:36:04.641783   78865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:36:04.650605   78865 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0829 19:36:04.650634   78865 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:36:04.650644   78865 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:36:04.650693   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.685589   78865 cri.go:89] found id: ""
	I0829 19:36:04.685656   78865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:36:04.702584   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:36:04.711693   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:36:04.711712   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:36:04.711753   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:36:04.720291   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:36:04.720349   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:36:04.729301   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:36:04.739449   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:36:04.739513   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:36:04.748786   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.757128   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:36:04.757175   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.767533   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:36:04.777322   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:36:04.777373   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:36:04.786269   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:36:04.795387   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:04.904530   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.430803   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:07.431525   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.169466   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.669719   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:05.750216   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.949551   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.043930   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.140396   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:36:06.140505   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.641069   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.141458   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.161360   78865 api_server.go:72] duration metric: took 1.020963124s to wait for apiserver process to appear ...
	I0829 19:36:07.161390   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:36:07.161426   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.327675   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:36:10.327707   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:36:10.327721   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.396704   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.396737   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:10.661699   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.666518   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.666544   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.162227   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.167736   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.167774   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.662428   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.668688   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.668727   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:12.162372   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:12.168297   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:36:12.175933   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:36:12.175956   78865 api_server.go:131] duration metric: took 5.014557664s to wait for apiserver health ...
	I0829 19:36:12.175967   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:12.175975   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:12.177903   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:36:09.930962   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:11.932180   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.669915   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.169150   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:12.179056   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:36:12.202639   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:36:12.221804   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:36:12.242859   78865 system_pods.go:59] 8 kube-system pods found
	I0829 19:36:12.242897   78865 system_pods.go:61] "coredns-6f6b679f8f-j8zzh" [01eaffa5-a976-441c-987c-bdf3b7f72cd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:36:12.242905   78865 system_pods.go:61] "etcd-no-preload-690795" [df54ae59-44ff-4f7b-b6c0-6145bdae3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:36:12.242912   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [aee247f2-1381-4571-a671-2cf140c78196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:36:12.242919   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [69244a85-2778-46c8-a95c-d0f8a264c0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:36:12.242923   78865 system_pods.go:61] "kube-proxy-q4mbt" [985478f9-235d-4922-a7fd-a0cbdddf3f68] Running
	I0829 19:36:12.242934   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [e1e141ab-eb79-4c87-bccd-274f1e7495b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:36:12.242940   78865 system_pods.go:61] "metrics-server-6867b74b74-svnwn" [e096a3dc-1166-4ee3-9f3f-e044064a5a13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:36:12.242945   78865 system_pods.go:61] "storage-provisioner" [6fc868fa-2221-45ad-903e-cd3d2297a3e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:36:12.242952   78865 system_pods.go:74] duration metric: took 21.125083ms to wait for pod list to return data ...
	I0829 19:36:12.242962   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:36:12.253567   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:36:12.253598   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:36:12.253612   78865 node_conditions.go:105] duration metric: took 10.645029ms to run NodePressure ...
	I0829 19:36:12.253634   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:12.514683   78865 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520060   78865 kubeadm.go:739] kubelet initialised
	I0829 19:36:12.520082   78865 kubeadm.go:740] duration metric: took 5.371928ms waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520088   78865 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:36:12.524795   78865 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:14.533484   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:14.430676   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:16.930723   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.668846   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.669308   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.031326   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.530568   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.430550   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.431080   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.431736   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.168590   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.168898   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.531983   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.032162   78865 pod_ready.go:93] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:22.032187   78865 pod_ready.go:82] duration metric: took 9.507358099s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:22.032200   78865 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038935   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.038956   78865 pod_ready.go:82] duration metric: took 1.006750868s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038966   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043258   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.043278   78865 pod_ready.go:82] duration metric: took 4.305789ms for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043298   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049140   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.049159   78865 pod_ready.go:82] duration metric: took 5.852855ms for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049170   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055033   78865 pod_ready.go:93] pod "kube-proxy-q4mbt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.055054   78865 pod_ready.go:82] duration metric: took 5.87681ms for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055067   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229706   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.229734   78865 pod_ready.go:82] duration metric: took 174.6598ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229748   78865 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:25.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:25.930818   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.430312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.169024   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:26.169599   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.668840   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:27.736899   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.235632   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.430611   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.930362   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.669561   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.671106   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.236488   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.736240   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.931264   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.430665   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
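	(Aside, not part of the log: with no apiserver process found, the cycle above falls back to crictl, querying each control-plane component with "crictl ps -a --quiet --name=<component>" and getting an empty ID list every time. An illustrative Go sketch of that sweep; the helper name is an assumption, the commands are copied from the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the crictl queries in the log: it returns the
	// matching container IDs, which in this run are all empty.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			ids, err := listContainerIDs(name)
			switch {
			case err != nil:
				fmt.Printf("listing %q failed: %v\n", name, err)
			case len(ids) == 0:
				fmt.Printf("no container was found matching %q\n", name)
			default:
				fmt.Printf("%s: %d container(s)\n", name, len(ids))
			}
		}
	}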
	I0829 19:36:35.172336   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.671072   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:36.736855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:38.736902   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:39.929907   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.930998   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
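	(Aside, not part of the log: each retry ends with the same log-gathering pass, collecting the kubelet and CRI-O journals, filtered dmesg, kubectl describe nodes, which fails while localhost:8443 refuses connections, and a container-status listing. A rough, illustrative Go wrapper that runs the same node-side commands; the commands are taken verbatim from the log, the wrapper itself is an assumption.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Commands copied from the "Gathering logs for ..." lines above.
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			fmt.Println("==>", c)
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Print(string(out))
			if err != nil {
				// With nothing listening on localhost:8443, describe nodes is
				// expected to fail exactly as it does in the log.
				fmt.Println("command failed:", err)
			}
		}
	}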
	I0829 19:36:40.169548   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:42.672996   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.236777   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.736150   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.931489   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:45.933439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.431152   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:45.169686   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:47.169784   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:46.236187   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.736605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.431543   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.931361   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:49.669984   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.168756   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.736642   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:53.236282   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.430710   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:57.430791   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:54.169625   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.169688   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.170249   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.235887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:00.236133   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.931992   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.430785   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:00.669482   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.669691   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.237155   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.237334   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.431033   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.431419   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.431901   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.168832   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:07.669695   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.736029   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.736415   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:10.930895   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.430209   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:10.168947   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:12.669439   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:11.236100   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.236138   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.930954   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.931355   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.669895   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.170115   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.736304   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:19.736492   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.429878   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.430233   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:19.668651   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:21.669949   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.670402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.235749   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.236078   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.431492   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.930440   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:26.168605   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.170938   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.735518   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.737503   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.431222   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.931202   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
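The pod_ready lines interleaved above come from the other StartStop clusters in this run polling their metrics-server pods, which keep reporting "Ready":"False". As a rough sketch, the same condition can be checked by hand with kubectl; the pod name below is taken from the log, while <profile> stands in for the kubectl context, which is not visible in this excerpt:

    # print the Ready condition of one of the polled metrics-server pods
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-tbkxg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
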
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
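The repeated block above is one full pass of minikube's log-collection loop for the v1.20.0 cluster: with the apiserver on localhost:8443 refusing connections, every crictl query for a control-plane container returns nothing and the describe-nodes call fails, so only the kubelet, dmesg, CRI-O and container-status logs are gathered before the next retry. A minimal sketch for rerunning the same checks by hand over minikube ssh is below; the individual commands are the ones shown in the log, and <profile> is a placeholder because the profile name is not visible in this excerpt:

    # check for a running kube-apiserver container (empty output while the control plane is down)
    minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    # recent kubelet logs on the node
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    # the same describe-nodes call the loop retries, using the kubectl bundled on the node
    minikube -p <profile> ssh -- "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
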
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.668490   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:32.669630   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.235788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.735661   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.931996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.932964   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.429817   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:35.168843   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.169415   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.736103   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.736554   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.235995   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.430698   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.430836   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:39.670059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.169724   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.237785   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.736417   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.929743   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.930603   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.669068   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:47.169440   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.737461   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.236079   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.431026   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.931134   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:49.671574   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:52.168702   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.237371   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:53.736122   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.430278   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.431024   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.169824   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.668831   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.236564   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.736176   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.930996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.931806   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.430980   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:59.169059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:01.668656   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.669716   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.736901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.236263   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.431293   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:07.930953   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.168530   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.169657   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.736429   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.236731   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.931677   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.431484   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:10.668769   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.669771   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:10.736651   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.737433   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.236521   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:14.930832   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:16.931496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:14.669989   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.169402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.736005   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:20.236887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:19.430240   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.930946   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:19.170077   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.670142   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.672076   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:22.237387   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.737069   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.934621   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:26.430632   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:26.168927   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.169453   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:27.235855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:29.236537   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.929913   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.930403   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.931280   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:30.169738   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.669256   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:31.736303   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:34.235284   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:35.430450   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.931562   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:35.169023   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.668411   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:36.236332   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:38.236971   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:40.237468   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.932147   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.430752   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.170246   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.669492   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.737831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.236831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:44.930652   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:46.930941   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:44.669939   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.168670   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.735386   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.736163   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.431741   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.931271   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.169856   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.669655   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:52.236654   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:54.735292   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:53.931350   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.430287   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.432418   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:54.170237   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.668188   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.668309   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.736467   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:59.236483   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:00.930424   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.931161   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:00.668839   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.670762   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.236788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:03.237406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.430398   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.431044   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:05.169920   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.170223   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.735375   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.737017   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.235794   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:09.930945   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:11.931285   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:09.669065   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.169167   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.236756   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.237536   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.431203   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.930983   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:14.169307   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.163640   79073 pod_ready.go:82] duration metric: took 4m0.000694226s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:16.163683   79073 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:16.163706   79073 pod_ready.go:39] duration metric: took 4m12.036045825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:16.163738   79073 kubeadm.go:597] duration metric: took 4m20.35086556s to restartPrimaryControlPlane
	W0829 19:39:16.163795   79073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:16.163827   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:16.736978   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.236047   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.431674   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:21.930447   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:21.236189   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.236347   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.237213   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.932634   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:26.431145   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:28.431496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:27.237871   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:29.735887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:30.931849   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:33.424593   79559 pod_ready.go:82] duration metric: took 4m0.000177098s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:33.424633   79559 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:33.424656   79559 pod_ready.go:39] duration metric: took 4m10.047294609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:33.424687   79559 kubeadm.go:597] duration metric: took 4m17.474785939s to restartPrimaryControlPlane
	W0829 19:39:33.424745   79559 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:33.424773   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.737021   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.236447   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
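The repeated blocks above are iterations of minikube's poll-and-diagnose loop: until a kube-apiserver process is found on the node, it re-lists CRI containers for each control-plane component and gathers kubelet, dmesg, "describe nodes", CRI-O, and container-status logs. A minimal bash sketch of that pattern, reusing the commands visible in the log; the loop, retry count, and sleep are illustrative, not minikube's actual logs.go implementation:

# Illustrative retry loop approximating the polling visible in the log above.
for attempt in $(seq 1 30); do
  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
    echo "kube-apiserver is up"; break
  fi
  # No apiserver yet: check CRI containers and collect diagnostics, as the log shows.
  sudo crictl ps -a --quiet --name=kube-apiserver
  sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
  sudo journalctl -u crio -n 400 > /tmp/crio.log
  sleep 3
done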
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:36.736406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:39.235941   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:42.401382   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.237529155s)
	I0829 19:39:42.401460   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:42.428754   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:42.441896   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:42.456122   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:42.456147   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:42.456190   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:42.471887   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:42.471947   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:42.483709   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:42.493000   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:42.493070   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:42.511916   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.520829   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:42.520891   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.530567   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:42.540199   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:42.540252   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
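The grep/rm pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist yet). A hedged bash sketch of the same pattern; the endpoint and file names are taken from the log, the loop itself is illustrative:

# Sketch of the stale-kubeconfig cleanup pattern shown in the log (not minikube source).
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
    # Missing or pointing at a different endpoint: remove it so kubeadm init regenerates it.
    sudo rm -f "/etc/kubernetes/$f"
  fi
done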
	I0829 19:39:42.550058   79073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:42.596809   79073 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:39:42.596966   79073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:42.706623   79073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:42.706766   79073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:42.706931   79073 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:42.717740   79073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:42.719760   79073 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:42.719862   79073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:42.719929   79073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:42.720023   79073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:42.720079   79073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:42.720144   79073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:42.720193   79073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:42.720248   79073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:42.720315   79073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:42.720386   79073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:42.720459   79073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:42.720496   79073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:42.720555   79073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:42.827328   79073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:43.276222   79073 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:39:43.445594   79073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:43.554811   79073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:43.788184   79073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:43.788791   79073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:43.791871   79073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:43.794448   79073 out.go:235]   - Booting up control plane ...
	I0829 19:39:43.794600   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:43.794702   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:43.794800   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:43.813894   79073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:43.822272   79073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:43.822357   79073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:41.236034   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:43.236446   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:45.237605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:39:43.950283   79073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:39:43.950458   79073 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:39:44.452956   79073 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.82915ms
	I0829 19:39:44.453068   79073 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:39:49.455000   79073 kubeadm.go:310] [api-check] The API server is healthy after 5.001789194s
	I0829 19:39:49.473145   79073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:39:49.496760   79073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:39:49.530950   79073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:39:49.531148   79073 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-920571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:39:49.548546   79073 kubeadm.go:310] [bootstrap-token] Using token: bc4428.p8e3szrujohqnvnh
	I0829 19:39:47.735610   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.735833   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.549992   79073 out.go:235]   - Configuring RBAC rules ...
	I0829 19:39:49.550151   79073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:39:49.558070   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:39:49.573758   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:39:49.579988   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:39:49.585250   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:39:49.592477   79073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:39:49.863168   79073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:39:50.294056   79073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:39:50.862652   79073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:39:50.863644   79073 kubeadm.go:310] 
	I0829 19:39:50.863717   79073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:39:50.863729   79073 kubeadm.go:310] 
	I0829 19:39:50.863861   79073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:39:50.863881   79073 kubeadm.go:310] 
	I0829 19:39:50.863917   79073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:39:50.864019   79073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:39:50.864101   79073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:39:50.864111   79073 kubeadm.go:310] 
	I0829 19:39:50.864215   79073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:39:50.864225   79073 kubeadm.go:310] 
	I0829 19:39:50.864298   79073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:39:50.864312   79073 kubeadm.go:310] 
	I0829 19:39:50.864398   79073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:39:50.864517   79073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:39:50.864617   79073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:39:50.864631   79073 kubeadm.go:310] 
	I0829 19:39:50.864743   79073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:39:50.864856   79073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:39:50.864869   79073 kubeadm.go:310] 
	I0829 19:39:50.864983   79073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865110   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:39:50.865142   79073 kubeadm.go:310] 	--control-plane 
	I0829 19:39:50.865152   79073 kubeadm.go:310] 
	I0829 19:39:50.865262   79073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:39:50.865270   79073 kubeadm.go:310] 
	I0829 19:39:50.865370   79073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865527   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:39:50.866485   79073 kubeadm.go:310] W0829 19:39:42.565022    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866852   79073 kubeadm.go:310] W0829 19:39:42.566073    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866979   79073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:39:50.867009   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:39:50.867020   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:39:50.868683   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:39:50.869952   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:39:50.880385   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
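The 496-byte conflist itself is not printed in the log; the heredoc below is only an illustrative example of a bridge-plus-portmap CNI config of the kind minikube installs at this step, with assumed field values (cniVersion, subnet, flags), not the actual file that was copied:

# Illustrative bridge CNI config; all values are assumptions, not the logged 496-byte file.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF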
	I0829 19:39:50.900028   79073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:39:50.900152   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:50.900187   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-920571 minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=embed-certs-920571 minikube.k8s.io/primary=true
	I0829 19:39:51.090710   79073 ops.go:34] apiserver oom_adj: -16
	I0829 19:39:51.090865   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:51.591720   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.091579   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.591872   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.091671   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.591191   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.091640   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.591356   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.674005   79073 kubeadm.go:1113] duration metric: took 3.773916232s to wait for elevateKubeSystemPrivileges
	I0829 19:39:54.674046   79073 kubeadm.go:394] duration metric: took 4m58.910639816s to StartCluster
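The burst of "kubectl get sa default" calls above is minikube waiting for the default service account to appear before it treats the elevateKubeSystemPrivileges step (the minikube-rbac clusterrolebinding created a few lines earlier) as complete. A small sketch of that wait, reusing the binary and kubeconfig paths from the log; the loop itself is illustrative:

# Wait for the default service account to exist (illustrative; minikube does this in Go).
until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done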
	I0829 19:39:54.674070   79073 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.674178   79073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:39:54.675789   79073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.676038   79073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:39:54.676095   79073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:39:54.676184   79073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-920571"
	I0829 19:39:54.676210   79073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-920571"
	I0829 19:39:54.676222   79073 addons.go:69] Setting metrics-server=true in profile "embed-certs-920571"
	I0829 19:39:54.676225   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:39:54.676241   79073 addons.go:234] Setting addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:54.676264   79073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920571"
	I0829 19:39:54.676216   79073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-920571"
	W0829 19:39:54.676329   79073 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:39:54.676360   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	W0829 19:39:54.676392   79073 addons.go:243] addon metrics-server should already be in state true
	I0829 19:39:54.676455   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.676650   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676664   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676682   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676684   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676824   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676859   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.677794   79073 out.go:177] * Verifying Kubernetes components...
	I0829 19:39:54.679112   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:39:54.694669   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:39:54.694717   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0829 19:39:54.695090   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695420   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695532   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0829 19:39:54.695640   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695656   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695925   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695948   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695951   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.696038   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696266   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696373   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.696392   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.696443   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.696600   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.696629   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.696745   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.697378   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.697413   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.702955   79073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-920571"
	W0829 19:39:54.702978   79073 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:39:54.703003   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.703347   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.703377   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.714194   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0829 19:39:54.714526   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0829 19:39:54.714735   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.714916   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.715368   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715369   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715389   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715401   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715712   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715713   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715944   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.715943   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.717556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.717758   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.718972   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0829 19:39:54.719212   79073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:39:54.719303   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.719212   79073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:39:52.236231   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.238843   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.719723   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.719735   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.720033   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.720307   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:39:54.720322   79073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:39:54.720342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.720533   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.720559   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.720952   79073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:54.720975   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:39:54.720992   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.723754   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724174   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.724198   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724516   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.724684   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.724820   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.724879   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724973   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.725426   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.725466   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.725687   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.725827   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.725982   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.726117   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.743443   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0829 19:39:54.744025   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.744590   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.744618   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.744908   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.745030   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.746560   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.746809   79073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:54.746819   79073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:39:54.746831   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.749422   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749802   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.749827   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749904   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.750058   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.750206   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.750320   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.902922   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:39:54.921933   79073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936483   79073 node_ready.go:49] node "embed-certs-920571" has status "Ready":"True"
	I0829 19:39:54.936513   79073 node_ready.go:38] duration metric: took 14.542582ms for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936524   79073 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:54.945389   79073 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:55.076394   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:39:55.076421   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:39:55.089140   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:55.096473   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:55.128207   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:39:55.128235   79073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:39:55.186402   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.186429   79073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:39:55.262731   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.548177   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548521   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548542   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.548555   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548564   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548824   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548857   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Closing plugin on server side
	I0829 19:39:55.548872   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.555956   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.555971   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.556210   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.556227   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020289   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020317   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020610   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020632   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020642   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020650   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020912   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020931   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.369749   79073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.106975723s)
	I0829 19:39:56.369809   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.369825   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370119   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370143   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370154   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.370168   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370407   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370428   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370440   79073 addons.go:475] Verifying addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:56.373030   79073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:39:56.374322   79073 addons.go:510] duration metric: took 1.698226444s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:39:56.460329   79073 pod_ready.go:93] pod "etcd-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:56.460362   79073 pod_ready.go:82] duration metric: took 1.51494335s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:56.460375   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467017   79073 pod_ready.go:93] pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:58.467040   79073 pod_ready.go:82] duration metric: took 2.006657264s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467050   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:59.826535   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.4017346s)
	I0829 19:39:59.826609   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:59.849311   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:59.859855   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:59.874237   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:59.874262   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:59.874315   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:39:59.883719   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:59.883785   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:59.893307   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:39:59.902478   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:59.902519   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:59.912664   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.932387   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:59.932443   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.948358   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:39:59.965812   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:59.965867   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:59.975437   79559 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:00.022167   79559 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:00.022347   79559 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:00.126622   79559 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:00.126777   79559 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:00.126914   79559 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:00.135123   79559 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:56.736712   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:59.235639   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:00.137714   79559 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:00.137806   79559 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:00.137875   79559 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:00.138003   79559 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:00.138114   79559 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:00.138184   79559 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:00.138240   79559 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:00.138297   79559 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:00.138351   79559 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:00.138443   79559 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:00.138555   79559 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:00.138607   79559 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:00.138682   79559 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:00.368674   79559 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:00.454426   79559 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:00.576835   79559 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:00.650342   79559 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:01.038392   79559 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:01.038806   79559 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:01.041297   79559 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:01.043020   79559 out.go:235]   - Booting up control plane ...
	I0829 19:40:01.043127   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:01.043224   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:01.043501   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:01.062342   79559 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:01.068185   79559 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:01.068247   79559 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:01.202906   79559 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:01.203076   79559 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:01.705241   79559 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.336154ms
	I0829 19:40:01.705368   79559 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:00.476336   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:02.973188   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.473576   79073 pod_ready.go:93] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.473607   79073 pod_ready.go:82] duration metric: took 5.006550689s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.473616   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478026   79073 pod_ready.go:93] pod "kube-proxy-25cmq" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.478045   79073 pod_ready.go:82] duration metric: took 4.423884ms for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478054   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482541   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.482560   79073 pod_ready.go:82] duration metric: took 4.499742ms for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482566   79073 pod_ready.go:39] duration metric: took 8.54603076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:03.482581   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:03.482623   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:03.502670   79073 api_server.go:72] duration metric: took 8.826595134s to wait for apiserver process to appear ...
	I0829 19:40:03.502695   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:03.502718   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:40:03.507953   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:40:03.508948   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:03.508968   79073 api_server.go:131] duration metric: took 6.265433ms to wait for apiserver health ...
	I0829 19:40:03.508977   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:03.514929   79073 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:03.514962   79073 system_pods.go:61] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.514971   79073 system_pods.go:61] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.514979   79073 system_pods.go:61] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.514987   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.514994   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.515000   79073 system_pods.go:61] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.515009   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.515018   79073 system_pods.go:61] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.515027   79073 system_pods.go:61] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.515036   79073 system_pods.go:74] duration metric: took 6.052187ms to wait for pod list to return data ...
	I0829 19:40:03.515046   79073 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:03.518040   79073 default_sa.go:45] found service account: "default"
	I0829 19:40:03.518060   79073 default_sa.go:55] duration metric: took 3.004653ms for default service account to be created ...
	I0829 19:40:03.518069   79073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:03.523915   79073 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:03.523942   79073 system_pods.go:89] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.523949   79073 system_pods.go:89] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.523954   79073 system_pods.go:89] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.523958   79073 system_pods.go:89] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.523962   79073 system_pods.go:89] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.523965   79073 system_pods.go:89] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.523968   79073 system_pods.go:89] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.523973   79073 system_pods.go:89] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.523978   79073 system_pods.go:89] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.523986   79073 system_pods.go:126] duration metric: took 5.911567ms to wait for k8s-apps to be running ...
	I0829 19:40:03.523997   79073 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:03.524049   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:03.541502   79073 system_svc.go:56] duration metric: took 17.4955ms WaitForService to wait for kubelet
	I0829 19:40:03.541538   79073 kubeadm.go:582] duration metric: took 8.865466463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:03.541564   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:03.544700   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:03.544728   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:03.544744   79073 node_conditions.go:105] duration metric: took 3.172559ms to run NodePressure ...
	I0829 19:40:03.544758   79073 start.go:241] waiting for startup goroutines ...
	I0829 19:40:03.544771   79073 start.go:246] waiting for cluster config update ...
	I0829 19:40:03.544789   79073 start.go:255] writing updated cluster config ...
	I0829 19:40:03.545136   79073 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:03.609413   79073 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:03.611490   79073 out.go:177] * Done! kubectl is now configured to use "embed-certs-920571" cluster and "default" namespace by default
	I0829 19:40:01.236210   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.236420   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:05.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:06.707891   79559 kubeadm.go:310] [api-check] The API server is healthy after 5.002523987s
	I0829 19:40:06.719470   79559 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:06.733886   79559 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:06.759672   79559 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:06.759933   79559 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-672127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:06.771514   79559 kubeadm.go:310] [bootstrap-token] Using token: fzav4x.eeztheucmrep51py
	I0829 19:40:06.772887   79559 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:06.773014   79559 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:06.778644   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:06.792388   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:06.798560   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:06.801930   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:06.805767   79559 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:07.119680   79559 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:07.536660   79559 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:08.115528   79559 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:08.115550   79559 kubeadm.go:310] 
	I0829 19:40:08.115621   79559 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:08.115657   79559 kubeadm.go:310] 
	I0829 19:40:08.115780   79559 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:08.115802   79559 kubeadm.go:310] 
	I0829 19:40:08.115843   79559 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:08.115929   79559 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:08.116002   79559 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:08.116011   79559 kubeadm.go:310] 
	I0829 19:40:08.116087   79559 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:08.116099   79559 kubeadm.go:310] 
	I0829 19:40:08.116154   79559 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:08.116173   79559 kubeadm.go:310] 
	I0829 19:40:08.116247   79559 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:08.116386   79559 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:08.116477   79559 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:08.116487   79559 kubeadm.go:310] 
	I0829 19:40:08.116599   79559 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:08.116705   79559 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:08.116712   79559 kubeadm.go:310] 
	I0829 19:40:08.116779   79559 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.116879   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:08.116931   79559 kubeadm.go:310] 	--control-plane 
	I0829 19:40:08.116947   79559 kubeadm.go:310] 
	I0829 19:40:08.117048   79559 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:08.117058   79559 kubeadm.go:310] 
	I0829 19:40:08.117154   79559 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.117270   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:08.118512   79559 kubeadm.go:310] W0829 19:39:59.991394    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118870   79559 kubeadm.go:310] W0829 19:39:59.992249    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118981   79559 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:08.119009   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:40:08.119019   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:08.120832   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:08.122029   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:08.133326   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:08.150808   79559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:08.150867   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:08.150884   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-672127 minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=default-k8s-diff-port-672127 minikube.k8s.io/primary=true
	I0829 19:40:08.170047   79559 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:08.350103   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:07.736119   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:10.236910   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:08.850762   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.350244   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.850222   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.350462   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.850237   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.350179   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.851033   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.351069   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.442963   79559 kubeadm.go:1113] duration metric: took 4.29215456s to wait for elevateKubeSystemPrivileges
	I0829 19:40:12.442998   79559 kubeadm.go:394] duration metric: took 4m56.544013459s to StartCluster
	I0829 19:40:12.443020   79559 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.443110   79559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:40:12.444757   79559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.444998   79559 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:40:12.445061   79559 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:40:12.445138   79559 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445151   79559 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445173   79559 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445181   79559 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:40:12.445179   79559 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-672127"
	I0829 19:40:12.445210   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445210   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:40:12.445266   79559 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445313   79559 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445323   79559 addons.go:243] addon metrics-server should already be in state true
	I0829 19:40:12.445347   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445625   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445658   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445662   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445683   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445737   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445775   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.446414   79559 out.go:177] * Verifying Kubernetes components...
	I0829 19:40:12.447652   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:40:12.461386   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0829 19:40:12.461436   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0829 19:40:12.461805   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.461831   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462057   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0829 19:40:12.462324   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462327   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462341   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462347   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462373   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462701   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462798   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462807   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462817   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462886   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.463109   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.463360   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463392   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.463586   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463607   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.465961   79559 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.465971   79559 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:40:12.465991   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.466309   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.466355   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.480989   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0829 19:40:12.481216   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 19:40:12.481407   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481639   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481843   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.481858   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482222   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.482249   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482291   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482440   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.482576   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482745   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.484681   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485336   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485664   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0829 19:40:12.486377   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.486547   79559 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:40:12.486922   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.486945   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.487310   79559 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:40:12.487586   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.488042   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:40:12.488060   79559 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:40:12.488081   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.488266   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.488307   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.488874   79559 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.488897   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:40:12.488914   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.492291   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492699   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492814   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.492844   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493059   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493128   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.493144   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493259   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493300   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493432   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.493471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493822   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.493972   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.494114   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.505220   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0829 19:40:12.505690   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.506337   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.506363   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.506727   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.506899   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.508602   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.508796   79559 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.508810   79559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:40:12.508829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.511310   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511660   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.511691   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.511969   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.512110   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.512253   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.642279   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:40:12.666598   79559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682873   79559 node_ready.go:49] node "default-k8s-diff-port-672127" has status "Ready":"True"
	I0829 19:40:12.682895   79559 node_ready.go:38] duration metric: took 16.267143ms for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682904   79559 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:12.693451   79559 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:12.736525   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:40:12.736548   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:40:12.754764   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:40:12.754786   79559 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:40:12.806826   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:12.806856   79559 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:40:12.817164   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.837896   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.903140   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:14.124266   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.307063383s)
	I0829 19:40:14.124305   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286373382s)
	I0829 19:40:14.124324   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124343   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124430   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221258684s)
	I0829 19:40:14.124473   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124487   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124635   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124649   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124659   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124667   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124794   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124813   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124831   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124848   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124856   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124873   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124864   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124882   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124896   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124902   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124913   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124935   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.125356   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.125359   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.125381   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126568   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.126637   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.126656   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126704   79559 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-672127"
	I0829 19:40:14.193216   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.193238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.193544   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.193562   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.195467   79559 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0829 19:40:12.237641   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.736679   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.196698   79559 addons.go:510] duration metric: took 1.751639165s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0829 19:40:14.720042   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.199482   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.235908   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.735901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.199705   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.699776   79559 pod_ready.go:93] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.699801   79559 pod_ready.go:82] duration metric: took 7.006327617s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.699810   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704240   79559 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.704261   79559 pod_ready.go:82] duration metric: took 4.444744ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704269   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710740   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.710761   79559 pod_ready.go:82] duration metric: took 2.006486043s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710770   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715111   79559 pod_ready.go:93] pod "kube-proxy-nqbn4" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.715134   79559 pod_ready.go:82] duration metric: took 4.357535ms for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715146   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719192   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.719207   79559 pod_ready.go:82] duration metric: took 4.054087ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719222   79559 pod_ready.go:39] duration metric: took 9.036299009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:21.719234   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:21.719289   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:21.734507   79559 api_server.go:72] duration metric: took 9.289477227s to wait for apiserver process to appear ...
	I0829 19:40:21.734531   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:21.734555   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:40:21.739963   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:40:21.740847   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:21.740865   79559 api_server.go:131] duration metric: took 6.327694ms to wait for apiserver health ...
	I0829 19:40:21.740872   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:21.747609   79559 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:21.747636   79559 system_pods.go:61] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.747643   79559 system_pods.go:61] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:21.747648   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.747654   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.747659   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.747662   79559 system_pods.go:61] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.747665   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.747670   79559 system_pods.go:61] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.747674   79559 system_pods.go:61] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.747680   79559 system_pods.go:74] duration metric: took 6.803459ms to wait for pod list to return data ...
	I0829 19:40:21.747689   79559 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:21.750153   79559 default_sa.go:45] found service account: "default"
	I0829 19:40:21.750168   79559 default_sa.go:55] duration metric: took 2.474593ms for default service account to be created ...
	I0829 19:40:21.750175   79559 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:21.901186   79559 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:21.901213   79559 system_pods.go:89] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.901219   79559 system_pods.go:89] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running
	I0829 19:40:21.901222   79559 system_pods.go:89] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.901227   79559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.901231   79559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.901235   79559 system_pods.go:89] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.901238   79559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.901245   79559 system_pods.go:89] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.901249   79559 system_pods.go:89] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.901257   79559 system_pods.go:126] duration metric: took 151.07798ms to wait for k8s-apps to be running ...
	I0829 19:40:21.901263   79559 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:21.901306   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:21.916730   79559 system_svc.go:56] duration metric: took 15.457902ms WaitForService to wait for kubelet
	I0829 19:40:21.916757   79559 kubeadm.go:582] duration metric: took 9.471732105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:21.916773   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:22.099083   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:22.099119   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:22.099133   79559 node_conditions.go:105] duration metric: took 182.354927ms to run NodePressure ...
	I0829 19:40:22.099147   79559 start.go:241] waiting for startup goroutines ...
	I0829 19:40:22.099156   79559 start.go:246] waiting for cluster config update ...
	I0829 19:40:22.099168   79559 start.go:255] writing updated cluster config ...
	I0829 19:40:22.099536   79559 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:22.148307   79559 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:22.150361   79559 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-672127" cluster and "default" namespace by default
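At this point the default-k8s-diff-port-672127 profile is reported as up, with only the metrics-server pod still Pending. A quick spot-check of the same state from the host, assuming the kubeconfig context written above, might look like:

    # list the node and the kube-system pods for this profile's context
    kubectl --context default-k8s-diff-port-672127 get nodes
    kubectl --context default-k8s-diff-port-672127 -n kube-system get pods -o wide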
	I0829 19:40:21.736121   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:23.229905   78865 pod_ready.go:82] duration metric: took 4m0.000141946s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	E0829 19:40:23.229943   78865 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:40:23.229991   78865 pod_ready.go:39] duration metric: took 4m10.70989222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:23.230021   78865 kubeadm.go:597] duration metric: took 4m18.600330645s to restartPrimaryControlPlane
	W0829 19:40:23.230078   78865 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:40:23.230136   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
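The [kubelet-check] probe above is just an HTTP GET against the kubelet's local healthz endpoint. To reproduce it by hand on the affected node (reached via minikube ssh for the profile in question, which is not named in these lines), one could run roughly:

    # the same probe kubeadm performs, plus the usual kubelet diagnostics
    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50

A connection-refused answer, as logged here, usually means the kubelet process never started or exited immediately after starting.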
	I0829 19:40:49.374221   78865 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.144057875s)
	I0829 19:40:49.374297   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:49.389586   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:40:49.399146   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:40:49.408450   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:40:49.408469   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:40:49.408521   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:40:49.417651   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:40:49.417706   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:40:49.427073   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:40:49.435307   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:40:49.435356   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:40:49.443720   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.452437   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:40:49.452493   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.461133   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:40:49.469515   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:40:49.469564   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
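The cleanup sequence above checks each leftover kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it. A minimal shell sketch of the same idea (minikube's actual implementation is the Go code referenced as kubeadm.go, not this script):

    # illustrative re-statement of the stale-config cleanup as a shell loop
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      p="/etc/kubernetes/$f"
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$p" 2>/dev/null; then
        sudo rm -f "$p"   # missing file or wrong endpoint: remove before kubeadm init
      fi
    done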
	I0829 19:40:49.478224   78865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:49.523193   78865 kubeadm.go:310] W0829 19:40:49.504457    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.523801   78865 kubeadm.go:310] W0829 19:40:49.505165    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.640221   78865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:57.429227   78865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:57.429293   78865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:57.429396   78865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:57.429536   78865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:57.429665   78865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:57.429757   78865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:40:57.431358   78865 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:57.431434   78865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:57.431485   78865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:57.431566   78865 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:57.431640   78865 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:57.431711   78865 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:57.431786   78865 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:57.431847   78865 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:57.431893   78865 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:57.431956   78865 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:57.432013   78865 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:57.432052   78865 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:57.432109   78865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:57.432186   78865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:57.432275   78865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:57.432352   78865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:57.432444   78865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:57.432518   78865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:57.432595   78865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:57.432648   78865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:57.434057   78865 out.go:235]   - Booting up control plane ...
	I0829 19:40:57.434161   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:57.434245   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:57.434298   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:57.434396   78865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:57.434475   78865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:57.434509   78865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:57.434687   78865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:57.434772   78865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:57.434824   78865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.075612ms
	I0829 19:40:57.434887   78865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:57.434932   78865 kubeadm.go:310] [api-check] The API server is healthy after 5.002117161s
	I0829 19:40:57.435094   78865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:57.435232   78865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:57.435284   78865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:57.435429   78865 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-690795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:57.435472   78865 kubeadm.go:310] [bootstrap-token] Using token: adxyev.rcmf9k5ok190h0g1
	I0829 19:40:57.436846   78865 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:57.436936   78865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:57.437001   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:57.437113   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:57.437214   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:57.437307   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:57.437380   78865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:57.437480   78865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:57.437528   78865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:57.437577   78865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:57.437583   78865 kubeadm.go:310] 
	I0829 19:40:57.437635   78865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:57.437641   78865 kubeadm.go:310] 
	I0829 19:40:57.437704   78865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:57.437710   78865 kubeadm.go:310] 
	I0829 19:40:57.437744   78865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:57.437807   78865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:57.437851   78865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:57.437857   78865 kubeadm.go:310] 
	I0829 19:40:57.437907   78865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:57.437913   78865 kubeadm.go:310] 
	I0829 19:40:57.437951   78865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:57.437957   78865 kubeadm.go:310] 
	I0829 19:40:57.438000   78865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:57.438107   78865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:57.438188   78865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:57.438200   78865 kubeadm.go:310] 
	I0829 19:40:57.438289   78865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:57.438359   78865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:57.438364   78865 kubeadm.go:310] 
	I0829 19:40:57.438429   78865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438507   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:57.438525   78865 kubeadm.go:310] 	--control-plane 
	I0829 19:40:57.438534   78865 kubeadm.go:310] 
	I0829 19:40:57.438611   78865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:57.438621   78865 kubeadm.go:310] 
	I0829 19:40:57.438688   78865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438791   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:57.438814   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:40:57.438825   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:57.440836   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:57.442065   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:57.452700   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
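The 496-byte conflist written here is not reproduced in the log; to see what actually landed on the node one could read it back over SSH, assuming minikube ssh is given a command for this profile as elsewhere in this run:

    minikube -p no-preload-690795 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"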
	I0829 19:40:57.469549   78865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:57.469621   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:57.469656   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-690795 minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=no-preload-690795 minikube.k8s.io/primary=true
	I0829 19:40:57.503411   78865 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:57.648807   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.149067   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.649770   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.148932   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.649114   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.149833   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.649474   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.149795   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.649154   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.745084   78865 kubeadm.go:1113] duration metric: took 4.275525047s to wait for elevateKubeSystemPrivileges
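The repeated "kubectl get sa default" calls between 19:40:57 and 19:41:01 are a simple poll: the retries continue roughly twice a second until the default ServiceAccount exists, at which point the elevateKubeSystemPrivileges wait above completes. A rough shell equivalent of that wait, illustrative only and using the paths from the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    KUBECONFIG_FILE=/var/lib/minikube/kubeconfig
    # poll roughly every 500ms until the default ServiceAccount exists
    for i in $(seq 1 120); do
      if sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_FILE" >/dev/null 2>&1; then
        echo "default service account is present"
        break
      fi
      sleep 0.5
    done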
	I0829 19:41:01.745117   78865 kubeadm.go:394] duration metric: took 4m57.169926854s to StartCluster
	I0829 19:41:01.745134   78865 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.745209   78865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:41:01.746775   78865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.747005   78865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:41:01.747062   78865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:41:01.747155   78865 addons.go:69] Setting storage-provisioner=true in profile "no-preload-690795"
	I0829 19:41:01.747175   78865 addons.go:69] Setting default-storageclass=true in profile "no-preload-690795"
	I0829 19:41:01.747189   78865 addons.go:234] Setting addon storage-provisioner=true in "no-preload-690795"
	W0829 19:41:01.747199   78865 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:41:01.747200   78865 addons.go:69] Setting metrics-server=true in profile "no-preload-690795"
	I0829 19:41:01.747240   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747246   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:41:01.747243   78865 addons.go:234] Setting addon metrics-server=true in "no-preload-690795"
	W0829 19:41:01.747307   78865 addons.go:243] addon metrics-server should already be in state true
	I0829 19:41:01.747333   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747206   78865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-690795"
	I0829 19:41:01.747652   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747670   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747678   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747702   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747780   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747810   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.748790   78865 out.go:177] * Verifying Kubernetes components...
	I0829 19:41:01.750069   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:41:01.764006   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0829 19:41:01.765511   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766194   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.766218   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.766287   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0829 19:41:01.766670   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766694   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.766912   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.766965   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0829 19:41:01.767129   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767149   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.767304   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.767506   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.767737   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767755   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.768073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.768202   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768241   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.768615   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768646   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.771065   78865 addons.go:234] Setting addon default-storageclass=true in "no-preload-690795"
	W0829 19:41:01.771088   78865 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:41:01.771117   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.771415   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.771441   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.787271   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0829 19:41:01.788003   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.788577   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.788606   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.788885   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0829 19:41:01.789065   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0829 19:41:01.789073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.789361   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.789716   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.789774   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.790084   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.790243   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.790319   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.791018   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.791029   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.791393   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.791721   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.792306   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793557   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793806   78865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:41:01.794942   78865 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:41:01.795033   78865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:01.795049   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:41:01.795067   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.796032   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:41:01.796048   78865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:41:01.796065   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.799646   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800163   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800618   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800826   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800843   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800941   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801043   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801114   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801184   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801239   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801367   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.801484   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.807187   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0829 19:41:01.807604   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.808056   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.808070   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.808471   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.808671   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.810374   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.810569   78865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:01.810579   78865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:41:01.810591   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.813314   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.813766   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.813776   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.814029   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.814187   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.814292   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.814379   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.963011   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:41:01.981935   78865 node_ready.go:35] waiting up to 6m0s for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998366   78865 node_ready.go:49] node "no-preload-690795" has status "Ready":"True"
	I0829 19:41:01.998389   78865 node_ready.go:38] duration metric: took 16.418591ms for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998398   78865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:02.005811   78865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:02.053495   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:02.197657   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:02.239853   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:41:02.239877   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:41:02.270764   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:41:02.270789   78865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:41:02.327819   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.327853   78865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:41:02.380812   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.380843   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381117   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381191   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.381209   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.381217   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381432   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381444   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.384211   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.387013   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.387027   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.387286   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:02.387333   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.387345   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.027502   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:03.027535   78865 pod_ready.go:82] duration metric: took 1.02170157s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.027550   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.410428   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212715771s)
	I0829 19:41:03.410485   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.410503   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412586   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.412590   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412614   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412625   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.412632   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412926   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412947   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412954   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.587379   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203116606s)
	I0829 19:41:03.587437   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587452   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587770   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.587840   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.587859   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587874   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587878   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.588185   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.588206   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.588218   78865 addons.go:475] Verifying addon metrics-server=true in "no-preload-690795"
	I0829 19:41:03.588192   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.590131   78865 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:03.591280   78865 addons.go:510] duration metric: took 1.844219817s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:41:05.035315   78865 pod_ready.go:103] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"False"
	I0829 19:41:06.033037   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:06.033060   78865 pod_ready.go:82] duration metric: took 3.005501862s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:06.033068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039035   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.039059   78865 pod_ready.go:82] duration metric: took 1.005984859s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043096   78865 pod_ready.go:93] pod "kube-proxy-p7zvh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.043116   78865 pod_ready.go:82] duration metric: took 4.042896ms for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043125   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046934   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.046957   78865 pod_ready.go:82] duration metric: took 3.826283ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046966   78865 pod_ready.go:39] duration metric: took 5.048560252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:07.046983   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:41:07.047036   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:41:07.062234   78865 api_server.go:72] duration metric: took 5.315200823s to wait for apiserver process to appear ...
	I0829 19:41:07.062256   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:41:07.062277   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:41:07.068022   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:41:07.069170   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:41:07.069190   78865 api_server.go:131] duration metric: took 6.927858ms to wait for apiserver health ...
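The healthz probe here is a plain HTTPS GET against https://192.168.39.76:8443/healthz. The same endpoints can be queried from the host without handling client certificates by hand by going through kubectl, assuming the context created for this profile:

    kubectl --context no-preload-690795 get --raw /healthz
    kubectl --context no-preload-690795 get --raw '/readyz?verbose'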
	I0829 19:41:07.069198   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:41:07.075909   78865 system_pods.go:59] 9 kube-system pods found
	I0829 19:41:07.075932   78865 system_pods.go:61] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.075939   78865 system_pods.go:61] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.075944   78865 system_pods.go:61] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.075949   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.075953   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.075956   78865 system_pods.go:61] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.075960   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.075964   78865 system_pods.go:61] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.075968   78865 system_pods.go:61] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.075975   78865 system_pods.go:74] duration metric: took 6.771333ms to wait for pod list to return data ...
	I0829 19:41:07.075985   78865 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:41:07.079235   78865 default_sa.go:45] found service account: "default"
	I0829 19:41:07.079255   78865 default_sa.go:55] duration metric: took 3.264804ms for default service account to be created ...
	I0829 19:41:07.079263   78865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:41:07.083981   78865 system_pods.go:86] 9 kube-system pods found
	I0829 19:41:07.084006   78865 system_pods.go:89] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.084014   78865 system_pods.go:89] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.084019   78865 system_pods.go:89] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.084025   78865 system_pods.go:89] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.084029   78865 system_pods.go:89] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.084032   78865 system_pods.go:89] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.084037   78865 system_pods.go:89] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.084042   78865 system_pods.go:89] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.084045   78865 system_pods.go:89] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.084052   78865 system_pods.go:126] duration metric: took 4.784448ms to wait for k8s-apps to be running ...
	I0829 19:41:07.084062   78865 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:41:07.084104   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:07.098513   78865 system_svc.go:56] duration metric: took 14.440998ms WaitForService to wait for kubelet
	I0829 19:41:07.098551   78865 kubeadm.go:582] duration metric: took 5.351518255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:41:07.098574   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:41:07.231160   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:41:07.231189   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:41:07.231200   78865 node_conditions.go:105] duration metric: took 132.62068ms to run NodePressure ...
	I0829 19:41:07.231209   78865 start.go:241] waiting for startup goroutines ...
	I0829 19:41:07.231216   78865 start.go:246] waiting for cluster config update ...
	I0829 19:41:07.231225   78865 start.go:255] writing updated cluster config ...
	I0829 19:41:07.231503   78865 ssh_runner.go:195] Run: rm -f paused
	I0829 19:41:07.283204   78865 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:41:07.284751   78865 out.go:177] * Done! kubectl is now configured to use "no-preload-690795" cluster and "default" namespace by default
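Both profiles finish with their metrics-server-6867b74b74-* pod still Pending / ContainersNotReady. To see why the container never becomes ready, the usual next step would be to inspect the pod and recent events, assuming the k8s-app=metrics-server label the addon normally carries:

    kubectl --context no-preload-690795 -n kube-system describe pod -l k8s-app=metrics-server
    kubectl --context no-preload-690795 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20

Note that the addon was configured above with the image fake.domain/registry.k8s.io/echoserver:1.4, so an image-pull failure in the describe output would not be surprising.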
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
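	The repeated [kubelet-check] failures above mean kubeadm's health probe of the kubelet (an HTTP GET against localhost:10248/healthz) was refused, i.e. the kubelet never started listening on that port. Below is a minimal Go sketch of that same probe, assuming it is run on the node itself; it is an illustration only, not minikube or kubeadm code.

	// probe_kubelet_healthz.go - mirrors the check kubeadm reports above.
	// "connection refused" simply means nothing is listening on 10248,
	// i.e. the kubelet process never came up.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the failure mode in the log:
			// dial tcp 127.0.0.1:10248: connect: connection refused.
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println("kubelet healthz:", resp.StatusCode, string(body))
	}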
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
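	The grep/rm sequence above is minikube's stale-config cleanup before retrying kubeadm init: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing. A rough Go sketch of that pattern, assuming local sudo access; this is an illustration, not the actual kubeadm.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file) is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}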
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
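	After the init timeout, minikube asks the CRI runtime (via crictl) whether any control-plane containers exist at all; the empty "found id" results above confirm none were ever created. A hedged Go sketch of that per-component check, assuming crictl is on PATH; illustration only.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"}
		for _, name := range components {
			// Equivalent of: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s containers: %d (err: %v)\n", name, len(ids), err)
		}
	}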
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 
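	The suggested next steps from the log are to inspect the kubelet journal and retry the profile with the kubelet cgroup-driver override. A small Go sketch of that remediation path follows; the profile name and the number of journal lines shown are placeholders, not values from this run.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, for quick diagnostics.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s(err: %v)\n", name, args, out, err)
	}

	func main() {
		// Inspect the kubelet journal, as advised by 'journalctl -xeu kubelet' above.
		run("journalctl", "-xeu", "kubelet", "--no-pager", "-n", "50")
		// Retry the start with the cgroup-driver override suggested in the log.
		run("minikube", "start", "-p", "example-profile",
			"--extra-config=kubelet.cgroup-driver=systemd")
	}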
	
	
	==> CRI-O <==
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.114294590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960964114270694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d79ba9ff-bf6d-40b0-bdbd-09009289b448 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.114708805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c86e292-eaca-40e0-9aaf-397e4352c5d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.114773295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c86e292-eaca-40e0-9aaf-397e4352c5d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.115008447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c86e292-eaca-40e0-9aaf-397e4352c5d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.150259013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1279275c-1ebd-4272-ab24-82da328cd68a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.150344310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1279275c-1ebd-4272-ab24-82da328cd68a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.151997642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9338131-0f83-4ec6-8eca-4297b7694e22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.152514657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960964152489982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9338131-0f83-4ec6-8eca-4297b7694e22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.152956857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58419e1b-43f4-4d7f-8993-4c3616404f60 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.153020901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58419e1b-43f4-4d7f-8993-4c3616404f60 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.153215925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58419e1b-43f4-4d7f-8993-4c3616404f60 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.189151551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=945fa897-931a-4b43-9c09-46a60709823c name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.189238197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=945fa897-931a-4b43-9c09-46a60709823c name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.193715622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640462b2-241f-4203-9c8f-a7105e24d9db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.194175427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960964194152989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640462b2-241f-4203-9c8f-a7105e24d9db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.194686572Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a678672-60ef-4ef6-bd7d-d743817811ff name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.194834771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a678672-60ef-4ef6-bd7d-d743817811ff name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.195259037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a678672-60ef-4ef6-bd7d-d743817811ff name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.230303155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1e6b92f-3eaf-4525-9e50-e24aa160fb63 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.230379151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1e6b92f-3eaf-4525-9e50-e24aa160fb63 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.234001425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=108991fe-4e6c-45cb-8195-e759ad014156 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.234379905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960964234355428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=108991fe-4e6c-45cb-8195-e759ad014156 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.235233470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac69daa8-27cb-4e28-ae23-cb74b2ffbcba name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.235416763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac69daa8-27cb-4e28-ae23-cb74b2ffbcba name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:49:24 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:49:24.235639045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac69daa8-27cb-4e28-ae23-cb74b2ffbcba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	618b4f781c25c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fd8c012e46279       storage-provisioner
	cee7be91cef6a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f5c11184c99c8       coredns-6f6b679f8f-dxbt5
	2381ec99fe28e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b95b02a1645a8       coredns-6f6b679f8f-5p2vn
	046b89ea511cf       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   c151165840d43       kube-proxy-nqbn4
	13be5321a8c80       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   9035323cb17ef       kube-scheduler-default-k8s-diff-port-672127
	426978e9357aa       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   816f69dd8790e       kube-controller-manager-default-k8s-diff-port-672127
	e5e7abefb26ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   c8760b4447461       etcd-default-k8s-diff-port-672127
	8e9dc4baf0d69       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   c5e190c693572       kube-apiserver-default-k8s-diff-port-672127
	21c253f2be8a7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   54135f9d2371b       kube-apiserver-default-k8s-diff-port-672127
	
	
	==> coredns [2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-672127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-672127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=default-k8s-diff-port-672127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:40:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-672127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:49:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:45:22 +0000   Thu, 29 Aug 2024 19:40:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:45:22 +0000   Thu, 29 Aug 2024 19:40:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:45:22 +0000   Thu, 29 Aug 2024 19:40:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:45:22 +0000   Thu, 29 Aug 2024 19:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.70
	  Hostname:    default-k8s-diff-port-672127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24f919e2aeac4008a7f67717f493f871
	  System UUID:                24f919e2-aeac-4008-a7f6-7717f493f871
	  Boot ID:                    bd93af7b-a144-4151-8829-b1780c1e1219
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5p2vn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-6f6b679f8f-dxbt5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-672127                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-672127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-672127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-nqbn4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-672127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-4p8qr                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-672127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-672127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-672127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-672127 event: Registered Node default-k8s-diff-port-672127 in Controller
	
	
	==> dmesg <==
	[  +0.054744] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039908] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.835762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.933018] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug29 19:35] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.307917] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.062221] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071459] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.178657] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.149530] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.319149] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.080498] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +2.405199] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +0.069002] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.018328] kauditd_printk_skb: 92 callbacks suppressed
	[  +6.567639] kauditd_printk_skb: 62 callbacks suppressed
	[Aug29 19:40] systemd-fstab-generator[2585]: Ignoring "noauto" option for root device
	[  +0.064460] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.995451] systemd-fstab-generator[2905]: Ignoring "noauto" option for root device
	[  +0.080138] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.262341] systemd-fstab-generator[3016]: Ignoring "noauto" option for root device
	[  +0.115783] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.237308] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2] <==
	{"level":"info","ts":"2024-08-29T19:40:02.717368Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T19:40:02.717671Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"755e1e1acc6a8bb3","initial-advertise-peer-urls":["https://192.168.50.70:2380"],"listen-peer-urls":["https://192.168.50.70:2380"],"advertise-client-urls":["https://192.168.50.70:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.70:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:40:02.717782Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:40:02.724087Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.70:2380"}
	{"level":"info","ts":"2024-08-29T19:40:02.724134Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.70:2380"}
	{"level":"info","ts":"2024-08-29T19:40:03.358906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:03.358983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:03.359010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 received MsgPreVoteResp from 755e1e1acc6a8bb3 at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:03.359024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:03.359030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 received MsgVoteResp from 755e1e1acc6a8bb3 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:03.359038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:03.359045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 755e1e1acc6a8bb3 elected leader 755e1e1acc6a8bb3 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:03.363087Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.365194Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"755e1e1acc6a8bb3","local-member-attributes":"{Name:default-k8s-diff-port-672127 ClientURLs:[https://192.168.50.70:2379]}","request-path":"/0/members/755e1e1acc6a8bb3/attributes","cluster-id":"43413f533dca4641","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:40:03.365606Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:03.368744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:03.369502Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:03.377135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.70:2379"}
	{"level":"info","ts":"2024-08-29T19:40:03.377658Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:03.378417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:40:03.369536Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"43413f533dca4641","local-member-id":"755e1e1acc6a8bb3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.378580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.378618Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.369991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:40:03.391024Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:49:24 up 14 min,  0 users,  load average: 0.29, 0.16, 0.12
	Linux default-k8s-diff-port-672127 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266] <==
	W0829 19:39:58.799458       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.806073       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.815632       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.849290       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.913853       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.959124       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.987908       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.996577       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.040732       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.073684       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.094129       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.116858       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.157003       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.167572       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.237048       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.254633       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.312152       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.314859       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.433268       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.434635       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.521083       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.545894       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.575188       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.675761       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.767501       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126] <==
	W0829 19:45:05.915176       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:45:05.915366       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:45:05.916516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:45:05.916572       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:46:05.917150       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:46:05.917435       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:46:05.917159       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:46:05.917572       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:46:05.918733       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:46:05.918805       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:48:05.919361       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:48:05.919692       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:48:05.919761       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:48:05.919790       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:48:05.920966       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:48:05.921012       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2] <==
	E0829 19:44:11.805081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:44:12.337034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:44:41.811518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:44:42.344670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:45:11.819113       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:45:12.353738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:45:22.897369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-672127"
	E0829 19:45:41.825174       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:45:42.361864       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:46:11.831195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:46:12.370114       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:46:14.460653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="207.614µs"
	I0829 19:46:27.462748       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="220.931µs"
	E0829 19:46:41.837967       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:46:42.376796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:47:11.844107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:47:12.385590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:47:41.850839       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:47:42.394731       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:48:11.857718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:48:12.402289       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:48:41.864597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:48:42.409610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:49:11.871440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:49:12.418100       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:40:13.487962       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:40:13.498462       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.70"]
	E0829 19:40:13.498538       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:40:13.594201       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:40:13.594240       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:40:13.594292       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:40:13.599117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:40:13.599341       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:40:13.599363       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:40:13.601247       1 config.go:197] "Starting service config controller"
	I0829 19:40:13.601275       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:40:13.601292       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:40:13.601302       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:40:13.601707       1 config.go:326] "Starting node config controller"
	I0829 19:40:13.601733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:40:13.701455       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:40:13.701512       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:40:13.702630       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa] <==
	W0829 19:40:04.946019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:40:04.946133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.786413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 19:40:05.786455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.787813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:05.787890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.849241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 19:40:05.849369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.878250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:40:05.878545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.897536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:40:05.898037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.972522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 19:40:05.972654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.981736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 19:40:05.981869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.984647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:40:05.984804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:06.171369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:06.171467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:06.182311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:06.182406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:06.215292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:40:06.215420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0829 19:40:06.537626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:48:08 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:08.442677    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:48:17 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:17.567438    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960897566412057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:17 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:17.567494    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960897566412057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:23 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:23.443463    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:48:27 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:27.568871    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960907568545813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:27 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:27.568968    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960907568545813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:34 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:34.442748    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:48:37 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:37.570218    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960917569901751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:37 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:37.570256    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960917569901751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:47 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:47.571270    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960927571056582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:47 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:47.571306    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960927571056582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:49 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:49.444059    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:48:57 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:57.573525    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960937573096400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:48:57 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:48:57.573614    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960937573096400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:01 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:01.444353    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:07.460351    2912 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:07.575033    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960947574415522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:07 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:07.575060    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960947574415522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:15 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:15.445117    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:49:17 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:17.577104    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960957576707003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:17 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:49:17.577157    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960957576707003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a] <==
	I0829 19:40:14.705277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 19:40:14.716397       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 19:40:14.716851       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 19:40:14.729832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 19:40:14.730064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-672127_2bf8c601-7827-4d3c-9539-177b2122de9c!
	I0829 19:40:14.732719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c355f18f-abcb-4c93-bc0a-543056a89838", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-672127_2bf8c601-7827-4d3c-9539-177b2122de9c became leader
	I0829 19:40:14.830806       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-672127_2bf8c601-7827-4d3c-9539-177b2122de9c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4p8qr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 describe pod metrics-server-6867b74b74-4p8qr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-672127 describe pod metrics-server-6867b74b74-4p8qr: exit status 1 (58.990086ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4p8qr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-672127 describe pod metrics-server-6867b74b74-4p8qr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.07s)
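The describe step above returns NotFound because the metrics-server pod had already been deleted by the time the post-mortem ran. As a rough manual alternative (not part of the test harness), one could capture the same non-running-pod listing with node placement before the pod disappears, reusing the field selector shown above:

	kubectl --context default-k8s-diff-port-672127 get pods -A --field-selector=status.phase!=Running -o wide

This mirrors the helpers_test.go query, only swapping the jsonpath output for -o wide; the context name is taken from this run's logs.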

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0829 19:41:41.164265   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:03.950993   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:26.706839   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-690795 -n no-preload-690795
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-29 19:50:07.797795835 +0000 UTC m=+6280.518588607
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
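The wait above polls the kubernetes-dashboard namespace for pods labeled k8s-app=kubernetes-dashboard for up to 9m0s. A minimal manual equivalent of that check, assuming the no-preload-690795 profile from this run is still reachable, would be:

	kubectl --context no-preload-690795 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-690795 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s

Both are standard kubectl invocations; the 540s timeout simply mirrors the test's 9m0s deadline, and the context name comes from the log above.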
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-690795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-690795 logs -n 25: (2.006655167s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo cat                              | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:31:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:32:00.606343   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:03.678411   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:09.758354   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:12.830416   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:18.910387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:21.982407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:28.062408   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:31.134407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:37.214369   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:40.286345   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:46.366360   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:49.438406   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:55.518437   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:58.590377   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:04.670397   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:07.742436   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:13.822348   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:16.894422   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:22.974353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:26.046337   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:32.126325   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:35.198391   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:41.278353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:44.350421   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:50.434297   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:53.502296   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:59.582448   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:02.654443   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:08.734358   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:11.806435   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:17.886372   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:20.958351   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:27.038356   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:30.110387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
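	The run of "Error dialing TCP ... no route to host" messages above is libmachine repeatedly probing the VM's SSH port while the guest is still down. A minimal, self-contained Go sketch of that probe loop follows; the address and rough intervals are taken from the log, and this is an illustration, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP dials addr until a TCP connection succeeds or the deadline
// expires, logging each failure the way the libmachine lines above do.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is reachable
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(3 * time.Second) // fixed pause between probes (illustrative)
	}
	return fmt.Errorf("gave up waiting for %s", addr)
}

func main() {
	if err := waitForTCP("192.168.39.76:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```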
	I0829 19:34:33.114600   79073 start.go:364] duration metric: took 4m24.136110592s to acquireMachinesLock for "embed-certs-920571"
	I0829 19:34:33.114658   79073 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:33.114666   79073 fix.go:54] fixHost starting: 
	I0829 19:34:33.115014   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:33.115043   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:33.130652   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0829 19:34:33.131096   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:33.131536   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:34:33.131555   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:33.131871   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:33.132060   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:33.132217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:34:33.133784   79073 fix.go:112] recreateIfNeeded on embed-certs-920571: state=Stopped err=<nil>
	I0829 19:34:33.133809   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	W0829 19:34:33.133951   79073 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:33.135573   79073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-920571" ...
	I0829 19:34:33.136726   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Start
	I0829 19:34:33.136873   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring networks are active...
	I0829 19:34:33.137613   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network default is active
	I0829 19:34:33.137909   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network mk-embed-certs-920571 is active
	I0829 19:34:33.138400   79073 main.go:141] libmachine: (embed-certs-920571) Getting domain xml...
	I0829 19:34:33.139091   79073 main.go:141] libmachine: (embed-certs-920571) Creating domain...
	I0829 19:34:33.112327   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:33.112363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112705   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:34:33.112736   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112943   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:34:33.114457   78865 machine.go:96] duration metric: took 4m37.430735456s to provisionDockerMachine
	I0829 19:34:33.114505   78865 fix.go:56] duration metric: took 4m37.452542806s for fixHost
	I0829 19:34:33.114516   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 4m37.452590646s
	W0829 19:34:33.114545   78865 start.go:714] error starting host: provision: host is not running
	W0829 19:34:33.114637   78865 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 19:34:33.114647   78865 start.go:729] Will try again in 5 seconds ...
	I0829 19:34:34.366249   79073 main.go:141] libmachine: (embed-certs-920571) Waiting to get IP...
	I0829 19:34:34.367233   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.367595   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.367671   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.367580   80412 retry.go:31] will retry after 294.1031ms: waiting for machine to come up
	I0829 19:34:34.663229   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.663677   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.663709   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.663624   80412 retry.go:31] will retry after 345.352879ms: waiting for machine to come up
	I0829 19:34:35.010102   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.010576   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.010604   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.010527   80412 retry.go:31] will retry after 295.49024ms: waiting for machine to come up
	I0829 19:34:35.308077   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.308580   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.308608   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.308525   80412 retry.go:31] will retry after 575.095429ms: waiting for machine to come up
	I0829 19:34:35.885400   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.885806   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.885835   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.885762   80412 retry.go:31] will retry after 524.463725ms: waiting for machine to come up
	I0829 19:34:36.411496   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:36.411840   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:36.411866   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:36.411802   80412 retry.go:31] will retry after 672.277111ms: waiting for machine to come up
	I0829 19:34:37.085978   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:37.086512   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:37.086537   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:37.086473   80412 retry.go:31] will retry after 1.185875442s: waiting for machine to come up
	I0829 19:34:38.274401   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:38.274881   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:38.274914   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:38.274827   80412 retry.go:31] will retry after 1.426721352s: waiting for machine to come up
	I0829 19:34:38.116486   78865 start.go:360] acquireMachinesLock for no-preload-690795: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:34:39.703333   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:39.703732   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:39.703756   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:39.703691   80412 retry.go:31] will retry after 1.500429564s: waiting for machine to come up
	I0829 19:34:41.206311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:41.206854   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:41.206882   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:41.206766   80412 retry.go:31] will retry after 2.021866027s: waiting for machine to come up
	I0829 19:34:43.230915   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:43.231329   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:43.231382   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:43.231318   80412 retry.go:31] will retry after 2.415112477s: waiting for machine to come up
	I0829 19:34:45.649815   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:45.650169   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:45.650221   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:45.650140   80412 retry.go:31] will retry after 3.292956483s: waiting for machine to come up
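	The retry.go:31 lines above show the wait-for-IP loop backing off with roughly increasing, jittered delays (from ~294ms up to ~3.3s). A small Go sketch of that capped backoff-with-jitter pattern, written from scratch for illustration rather than copied from minikube:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds, sleeping for a jittered,
// growing delay between attempts, capped at maxDelay - the shape of the
// "will retry after ..." intervals in the log above.
func retryWithBackoff(fn func() error, attempts int, base, maxDelay time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay))) // 0.5x..1.5x of delay
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 12, 300*time.Millisecond, 3*time.Second)
}
```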
	I0829 19:34:50.094786   79559 start.go:364] duration metric: took 3m31.488453615s to acquireMachinesLock for "default-k8s-diff-port-672127"
	I0829 19:34:50.094847   79559 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:50.094857   79559 fix.go:54] fixHost starting: 
	I0829 19:34:50.095330   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:50.095367   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:50.112044   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0829 19:34:50.112510   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:50.112941   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:34:50.112964   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:50.113325   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:50.113522   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:34:50.113663   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:34:50.115335   79559 fix.go:112] recreateIfNeeded on default-k8s-diff-port-672127: state=Stopped err=<nil>
	I0829 19:34:50.115378   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	W0829 19:34:50.115548   79559 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:50.117176   79559 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-672127" ...
	I0829 19:34:48.944274   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.944748   79073 main.go:141] libmachine: (embed-certs-920571) Found IP for machine: 192.168.61.243
	I0829 19:34:48.944776   79073 main.go:141] libmachine: (embed-certs-920571) Reserving static IP address...
	I0829 19:34:48.944793   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has current primary IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.945167   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.945195   79073 main.go:141] libmachine: (embed-certs-920571) Reserved static IP address: 192.168.61.243
	I0829 19:34:48.945214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | skip adding static IP to network mk-embed-certs-920571 - found existing host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"}
	I0829 19:34:48.945225   79073 main.go:141] libmachine: (embed-certs-920571) Waiting for SSH to be available...
	I0829 19:34:48.945236   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Getting to WaitForSSH function...
	I0829 19:34:48.947646   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948004   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.948034   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948132   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH client type: external
	I0829 19:34:48.948162   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa (-rw-------)
	I0829 19:34:48.948280   79073 main.go:141] libmachine: (embed-certs-920571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:34:48.948307   79073 main.go:141] libmachine: (embed-certs-920571) DBG | About to run SSH command:
	I0829 19:34:48.948328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | exit 0
	I0829 19:34:49.073781   79073 main.go:141] libmachine: (embed-certs-920571) DBG | SSH cmd err, output: <nil>: 
	I0829 19:34:49.074184   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetConfigRaw
	I0829 19:34:49.074813   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.077014   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077349   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.077369   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077550   79073 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:34:49.077724   79073 machine.go:93] provisionDockerMachine start ...
	I0829 19:34:49.077739   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.077936   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.080112   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080448   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.080472   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080548   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.080715   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080853   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080983   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.081110   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.081294   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.081306   79073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:34:49.182232   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:34:49.182282   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182556   79073 buildroot.go:166] provisioning hostname "embed-certs-920571"
	I0829 19:34:49.182582   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182783   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.185368   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185727   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.185751   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185901   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.186077   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186237   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186379   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.186505   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.186721   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.186740   79073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-920571 && echo "embed-certs-920571" | sudo tee /etc/hostname
	I0829 19:34:49.300225   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-920571
	
	I0829 19:34:49.300261   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.303129   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303497   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.303528   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303682   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.303883   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304061   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304193   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.304466   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.304650   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.304667   79073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920571/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:34:49.413678   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:49.413710   79073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:34:49.413765   79073 buildroot.go:174] setting up certificates
	I0829 19:34:49.413774   79073 provision.go:84] configureAuth start
	I0829 19:34:49.413786   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.414069   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.416618   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.416965   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.416993   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.417143   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.419308   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419585   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.419630   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419746   79073 provision.go:143] copyHostCerts
	I0829 19:34:49.419802   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:34:49.419820   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:34:49.419882   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:34:49.419973   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:34:49.419981   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:34:49.420005   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:34:49.420055   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:34:49.420063   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:34:49.420083   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:34:49.420129   79073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920571 san=[127.0.0.1 192.168.61.243 embed-certs-920571 localhost minikube]
	I0829 19:34:49.488345   79073 provision.go:177] copyRemoteCerts
	I0829 19:34:49.488396   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:34:49.488418   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.490954   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491290   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.491328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491473   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.491667   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.491794   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.491932   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.571847   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:34:49.594401   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 19:34:49.615988   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:34:49.638030   79073 provision.go:87] duration metric: took 224.241128ms to configureAuth
	I0829 19:34:49.638058   79073 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:34:49.638251   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:49.638342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.640876   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.641244   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641439   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.641662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641941   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.642126   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.642292   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.642307   79073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:34:49.862247   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:34:49.862276   79073 machine.go:96] duration metric: took 784.541058ms to provisionDockerMachine
	I0829 19:34:49.862286   79073 start.go:293] postStartSetup for "embed-certs-920571" (driver="kvm2")
	I0829 19:34:49.862296   79073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:34:49.862325   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.862632   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:34:49.862660   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.865463   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.865871   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.865899   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.866068   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.866285   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.866459   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.866644   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.948826   79073 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:34:49.952779   79073 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:34:49.952800   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:34:49.952858   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:34:49.952935   79073 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:34:49.953034   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:34:49.962083   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:49.986910   79073 start.go:296] duration metric: took 124.612025ms for postStartSetup
	I0829 19:34:49.986944   79073 fix.go:56] duration metric: took 16.872279139s for fixHost
	I0829 19:34:49.986964   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.989581   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.989919   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.989946   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.990080   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.990281   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990519   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.990835   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.991009   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.991020   79073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:34:50.094598   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960090.067799977
	
	I0829 19:34:50.094618   79073 fix.go:216] guest clock: 1724960090.067799977
	I0829 19:34:50.094626   79073 fix.go:229] Guest: 2024-08-29 19:34:50.067799977 +0000 UTC Remote: 2024-08-29 19:34:49.98694779 +0000 UTC m=+281.148944887 (delta=80.852187ms)
	I0829 19:34:50.094667   79073 fix.go:200] guest clock delta is within tolerance: 80.852187ms
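	The fix.go lines above parse the guest's `date +%s.%N` output and check how far the VM clock drifts from the host before deciding whether a resync is needed. A sketch of that comparison; the timestamps are the ones from the log, while the tolerance value and helper name are illustrative assumptions:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns how far the guest clock is ahead of (or behind) the host reference.
func guestClockDelta(guestOutput string, hostRef time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostRef), nil
}

func main() {
	// Host timestamp and guest output taken from the log lines above.
	hostRef := time.Date(2024, 8, 29, 19, 34, 49, 986947790, time.UTC)
	delta, err := guestClockDelta("1724960090.067799977", hostRef)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta.Abs() <= tolerance)
}
```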
	I0829 19:34:50.094672   79073 start.go:83] releasing machines lock for "embed-certs-920571", held for 16.98003549s
	I0829 19:34:50.094697   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.094962   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:50.097867   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098301   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.098331   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098494   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099007   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099190   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099276   79073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:34:50.099322   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.099429   79073 ssh_runner.go:195] Run: cat /version.json
	I0829 19:34:50.099453   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.101909   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.101932   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102283   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102342   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102363   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102460   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102647   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102717   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102899   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102964   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.103032   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.178744   79073 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:50.220024   79073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:34:50.370308   79073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:34:50.379363   79073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:34:50.379435   79073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:34:50.394787   79073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:34:50.394810   79073 start.go:495] detecting cgroup driver to use...
	I0829 19:34:50.394886   79073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:34:50.410061   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:34:50.423846   79073 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:34:50.423910   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:34:50.437117   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:34:50.450318   79073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:34:50.563588   79073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:34:50.706261   79073 docker.go:233] disabling docker service ...
	I0829 19:34:50.706356   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:34:50.721443   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:34:50.734284   79073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:34:50.871611   79073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:34:51.006487   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:34:51.019543   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:34:51.036398   79073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:34:51.036444   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.045884   79073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:34:51.045931   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.055634   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.065379   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.075104   79073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:34:51.085560   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.095777   79073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.114679   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.125695   79073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:34:51.135263   79073 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:34:51.135328   79073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:34:51.148534   79073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:34:51.158658   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:51.281185   79073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:34:51.378558   79073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:34:51.378618   79073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:34:51.383580   79073 start.go:563] Will wait 60s for crictl version
	I0829 19:34:51.383638   79073 ssh_runner.go:195] Run: which crictl
	I0829 19:34:51.387081   79073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:34:51.426413   79073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:34:51.426491   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.453777   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.481306   79073 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
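	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of `sed -i` commands (pause image, cgroup manager, default sysctls) and then restarts crio. A minimal Go sketch of the same kind of in-place key replacement; the path and keys come from the log, everything else is an illustrative assumption rather than minikube's own code:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue replaces any `key = ...` line in a crio drop-in config with
// `key = "value"`, mirroring the effect of the sed commands in the log.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setConfValue(conf, key, value); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```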
	I0829 19:34:50.118500   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Start
	I0829 19:34:50.118776   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring networks are active...
	I0829 19:34:50.119618   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network default is active
	I0829 19:34:50.120105   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network mk-default-k8s-diff-port-672127 is active
	I0829 19:34:50.120501   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Getting domain xml...
	I0829 19:34:50.121238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Creating domain...
	I0829 19:34:51.414344   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting to get IP...
	I0829 19:34:51.415308   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415790   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.415692   80540 retry.go:31] will retry after 256.92247ms: waiting for machine to come up
	I0829 19:34:51.674173   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674728   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674754   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.674670   80540 retry.go:31] will retry after 338.812858ms: waiting for machine to come up
	I0829 19:34:52.015450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.015977   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.016009   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.015920   80540 retry.go:31] will retry after 385.497306ms: waiting for machine to come up
	I0829 19:34:52.403718   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404324   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404361   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.404259   80540 retry.go:31] will retry after 536.615454ms: waiting for machine to come up
	I0829 19:34:52.943166   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943736   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.943678   80540 retry.go:31] will retry after 584.895039ms: waiting for machine to come up
	I0829 19:34:51.482485   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:51.485272   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485599   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:51.485632   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485803   79073 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:34:51.490493   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:51.505212   79073 kubeadm.go:883] updating cluster {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:34:51.505359   79073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:34:51.505413   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:51.539415   79073 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:34:51.539485   79073 ssh_runner.go:195] Run: which lz4
	I0829 19:34:51.543107   79073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:34:51.546831   79073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:34:51.546864   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:34:52.815579   79073 crio.go:462] duration metric: took 1.272496626s to copy over tarball
	I0829 19:34:52.815659   79073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
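	The two lines above copy the ~389 MB preloaded image tarball onto the VM and unpack it with `tar -I lz4` into /var. A short Go sketch that shells out to the same tar invocation locally; it assumes `tar`, `lz4`, and `sudo` are available, and unlike minikube (which runs this over SSH via ssh_runner) it is only an illustration:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload unpacks a crio preload tarball into dest, preserving the
// security.capability xattrs, using the tar flags visible in the log.
func extractPreload(tarball, dest string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```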
	I0829 19:34:53.530873   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531510   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531540   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:53.531452   80540 retry.go:31] will retry after 790.882954ms: waiting for machine to come up
	I0829 19:34:54.324385   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324785   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324817   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:54.324706   80540 retry.go:31] will retry after 815.842176ms: waiting for machine to come up
	I0829 19:34:55.142878   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143375   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:55.143325   80540 retry.go:31] will retry after 1.177682749s: waiting for machine to come up
	I0829 19:34:56.322780   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323215   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323248   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:56.323160   80540 retry.go:31] will retry after 1.158169512s: waiting for machine to come up
	I0829 19:34:57.483529   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.483990   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.484023   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:57.483917   80540 retry.go:31] will retry after 1.631842784s: waiting for machine to come up
	I0829 19:34:54.931044   79073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.115353131s)
	I0829 19:34:54.931077   79073 crio.go:469] duration metric: took 2.115468165s to extract the tarball
	I0829 19:34:54.931086   79073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:34:54.967902   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:55.006987   79073 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:34:55.007010   79073 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:34:55.007017   79073 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0829 19:34:55.007123   79073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:34:55.007187   79073 ssh_runner.go:195] Run: crio config
	I0829 19:34:55.051987   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:34:55.052016   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:34:55.052039   79073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:34:55.052077   79073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920571 NodeName:embed-certs-920571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:34:55.052254   79073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-920571"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:34:55.052337   79073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:34:55.061509   79073 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:34:55.061586   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:34:55.070182   79073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 19:34:55.086180   79073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:34:55.103184   79073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 19:34:55.119226   79073 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0829 19:34:55.122845   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:55.133782   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:55.266431   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:34:55.283043   79073 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571 for IP: 192.168.61.243
	I0829 19:34:55.283066   79073 certs.go:194] generating shared ca certs ...
	I0829 19:34:55.283081   79073 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:34:55.283237   79073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:34:55.283287   79073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:34:55.283297   79073 certs.go:256] generating profile certs ...
	I0829 19:34:55.283438   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/client.key
	I0829 19:34:55.283519   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key.dda9dcff
	I0829 19:34:55.283573   79073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key
	I0829 19:34:55.283708   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:34:55.283773   79073 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:34:55.283793   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:34:55.283831   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:34:55.283869   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:34:55.283901   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:34:55.283957   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:55.284835   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:34:55.330384   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:34:55.366718   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:34:55.393815   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:34:55.436855   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 19:34:55.463343   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:34:55.487693   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:34:55.511657   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:34:55.536017   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:34:55.558298   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:34:55.579840   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:34:55.601271   79073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:34:55.616634   79073 ssh_runner.go:195] Run: openssl version
	I0829 19:34:55.621890   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:34:55.633224   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637431   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637486   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.643034   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:34:55.654607   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:34:55.666297   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670433   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670492   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.675787   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:34:55.686953   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:34:55.697241   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701133   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701189   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.706242   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:34:55.716165   79073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:34:55.720159   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:34:55.727612   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:34:55.734806   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:34:55.742352   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:34:55.749483   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:34:55.756543   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:34:55.763413   79073 kubeadm.go:392] StartCluster: {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:34:55.763499   79073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:34:55.763537   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.803136   79073 cri.go:89] found id: ""
	I0829 19:34:55.803219   79073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:34:55.812851   79073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:34:55.812868   79073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:34:55.812907   79073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:34:55.823461   79073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:55.824969   79073 kubeconfig.go:125] found "embed-certs-920571" server: "https://192.168.61.243:8443"
	I0829 19:34:55.828095   79073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:34:55.838579   79073 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.243
	I0829 19:34:55.838616   79073 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:34:55.838626   79073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:34:55.838669   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.876618   79073 cri.go:89] found id: ""
	I0829 19:34:55.876674   79073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:34:55.893401   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:34:55.902557   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:34:55.902579   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:34:55.902631   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:34:55.911349   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:34:55.911407   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:34:55.920377   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:34:55.928764   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:34:55.928824   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:34:55.937630   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.945836   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:34:55.945897   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.954491   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:34:55.962466   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:34:55.962517   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:34:55.971080   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:34:55.979709   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:56.086301   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.378119   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.29178222s)
	I0829 19:34:57.378153   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.574026   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.655499   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.755371   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:34:57.755457   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.255939   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.755813   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.117916   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118404   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118427   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:59.118355   80540 retry.go:31] will retry after 2.806936823s: waiting for machine to come up
	I0829 19:35:01.927079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927473   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:01.927422   80540 retry.go:31] will retry after 3.008556566s: waiting for machine to come up
	I0829 19:34:59.255536   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.756296   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.802484   79073 api_server.go:72] duration metric: took 2.047112988s to wait for apiserver process to appear ...
	I0829 19:34:59.802516   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:34:59.802537   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:34:59.803088   79073 api_server.go:269] stopped: https://192.168.61.243:8443/healthz: Get "https://192.168.61.243:8443/healthz": dial tcp 192.168.61.243:8443: connect: connection refused
	I0829 19:35:00.302707   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.439793   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.439825   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.439837   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.482217   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.482245   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.802617   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.811079   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:02.811116   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.303128   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.307613   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:03.307657   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.803189   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.809164   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:35:03.816623   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:03.816649   79073 api_server.go:131] duration metric: took 4.014126212s to wait for apiserver health ...
	I0829 19:35:03.816657   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:35:03.816664   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:03.818484   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:03.819706   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:03.833365   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:35:03.851607   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:03.861274   79073 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:03.861313   79073 system_pods.go:61] "coredns-6f6b679f8f-2wrn6" [05e03841-faab-4fd4-88c9-199b39a71ba6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:03.861320   79073 system_pods.go:61] "etcd-embed-certs-920571" [5545a51a-3b76-4b39-b347-6f68b8d7edbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:03.861328   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [cecb3e4e-9d55-4dc9-8d14-884ffbf56475] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:03.861334   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [77e06ace-0262-418f-b41c-700aabf2fa1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:03.861338   79073 system_pods.go:61] "kube-proxy-hflpk" [a57a1785-8ccf-4955-b5b2-19c72032d9f5] Running
	I0829 19:35:03.861353   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [bdb2ed9c-3bf2-4e91-b6a4-ba947dab93ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:03.861359   79073 system_pods.go:61] "metrics-server-6867b74b74-xs5gp" [98380519-4a65-4208-b9cc-f1941a5c2f01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:03.861362   79073 system_pods.go:61] "storage-provisioner" [d18a769f-283f-4db3-aad0-82fc0267980f] Running
	I0829 19:35:03.861368   79073 system_pods.go:74] duration metric: took 9.738329ms to wait for pod list to return data ...
	I0829 19:35:03.861375   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:03.865311   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:03.865341   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:03.865355   79073 node_conditions.go:105] duration metric: took 3.974661ms to run NodePressure ...
	I0829 19:35:03.865373   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:04.939084   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939532   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939567   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:04.939479   80540 retry.go:31] will retry after 3.738266407s: waiting for machine to come up
	I0829 19:35:04.123411   79073 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127613   79073 kubeadm.go:739] kubelet initialised
	I0829 19:35:04.127639   79073 kubeadm.go:740] duration metric: took 4.197494ms waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127649   79073 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:04.132339   79073 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.136884   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136909   79073 pod_ready.go:82] duration metric: took 4.548897ms for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.136917   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136927   79073 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.141014   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141037   79073 pod_ready.go:82] duration metric: took 4.103179ms for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.141048   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141062   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.144778   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144799   79073 pod_ready.go:82] duration metric: took 3.728001ms for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.144807   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144812   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.255204   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255227   79073 pod_ready.go:82] duration metric: took 110.408053ms for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.255247   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255253   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656086   79073 pod_ready.go:93] pod "kube-proxy-hflpk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:04.656124   79073 pod_ready.go:82] duration metric: took 400.860776ms for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656137   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:06.674533   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	I0829 19:35:08.681559   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682042   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Found IP for machine: 192.168.50.70
	I0829 19:35:08.682070   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has current primary IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682080   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserving static IP address...
	I0829 19:35:08.682524   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserved static IP address: 192.168.50.70
	I0829 19:35:08.682564   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.682580   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for SSH to be available...
	I0829 19:35:08.682609   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | skip adding static IP to network mk-default-k8s-diff-port-672127 - found existing host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"}
	I0829 19:35:08.682623   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Getting to WaitForSSH function...
	I0829 19:35:08.684466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684816   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.684876   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684957   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH client type: external
	I0829 19:35:08.684982   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa (-rw-------)
	I0829 19:35:08.685032   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:08.685053   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | About to run SSH command:
	I0829 19:35:08.685069   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | exit 0
	I0829 19:35:08.806174   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:08.806493   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetConfigRaw
	I0829 19:35:08.807134   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:08.809574   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.809900   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.809924   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.810227   79559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:35:08.810457   79559 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:08.810478   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:08.810675   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.812964   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.813368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813620   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.813815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.813994   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.814161   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.814338   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.814533   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.814544   79559 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:08.914370   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:08.914415   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914742   79559 buildroot.go:166] provisioning hostname "default-k8s-diff-port-672127"
	I0829 19:35:08.914782   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914975   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.918471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.918829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.918857   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.919021   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.919186   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919373   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.919664   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.919865   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.919884   79559 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-672127 && echo "default-k8s-diff-port-672127" | sudo tee /etc/hostname
	I0829 19:35:09.032573   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-672127
	
	I0829 19:35:09.032606   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.035434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035811   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.035840   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035999   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.036182   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036465   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.036651   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.036833   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.036852   79559 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-672127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-672127/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-672127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:09.142908   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:09.142937   79559 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:09.142978   79559 buildroot.go:174] setting up certificates
	I0829 19:35:09.142995   79559 provision.go:84] configureAuth start
	I0829 19:35:09.143010   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:09.143258   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.145947   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146313   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.146339   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146460   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.148631   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.148953   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.148978   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.149128   79559 provision.go:143] copyHostCerts
	I0829 19:35:09.149188   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:09.149204   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:09.149261   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:09.149368   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:09.149378   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:09.149400   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:09.149492   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:09.149501   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:09.149520   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:09.149578   79559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-672127 san=[127.0.0.1 192.168.50.70 default-k8s-diff-port-672127 localhost minikube]
	I0829 19:35:09.370220   79559 provision.go:177] copyRemoteCerts
	I0829 19:35:09.370277   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:09.370301   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.373233   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373723   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.373756   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373966   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.374180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.374342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.374496   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.457104   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:35:09.481139   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:09.504611   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 19:35:09.529597   79559 provision.go:87] duration metric: took 386.586301ms to configureAuth
	I0829 19:35:09.529628   79559 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:09.529887   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:09.529989   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.532809   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533309   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.533342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533509   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.533743   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.533965   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.534169   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.534372   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.534523   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.534545   79559 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:09.754724   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:09.754752   79559 machine.go:96] duration metric: took 944.279776ms to provisionDockerMachine
	I0829 19:35:09.754766   79559 start.go:293] postStartSetup for "default-k8s-diff-port-672127" (driver="kvm2")
	I0829 19:35:09.754781   79559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:09.754807   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.755236   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:09.755270   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.757713   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.758125   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758274   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.758466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.758682   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.758823   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.841022   79559 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:09.846051   79559 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:09.846081   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:09.846163   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:09.846254   79559 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:09.846379   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:09.857443   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:09.884662   79559 start.go:296] duration metric: took 129.87923ms for postStartSetup
	I0829 19:35:09.884715   79559 fix.go:56] duration metric: took 19.789853711s for fixHost
	I0829 19:35:09.884739   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.888011   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888562   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.888593   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888789   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.888976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889188   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889347   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.889533   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.889723   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.889736   79559 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:09.990749   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960109.967111721
	
	I0829 19:35:09.990772   79559 fix.go:216] guest clock: 1724960109.967111721
	I0829 19:35:09.990782   79559 fix.go:229] Guest: 2024-08-29 19:35:09.967111721 +0000 UTC Remote: 2024-08-29 19:35:09.884720437 +0000 UTC m=+231.415600706 (delta=82.391284ms)
	I0829 19:35:09.990835   79559 fix.go:200] guest clock delta is within tolerance: 82.391284ms
	I0829 19:35:09.990846   79559 start.go:83] releasing machines lock for "default-k8s-diff-port-672127", held for 19.896020367s
	I0829 19:35:09.990891   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.991180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.994076   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.994459   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994613   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995121   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995318   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995407   79559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:09.995464   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.995531   79559 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:09.995569   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.998302   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998673   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998703   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998732   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998750   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998832   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.998976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.999026   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999109   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999162   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999249   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999404   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.999395   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:10.124503   79559 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:10.130734   79559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:10.275859   79559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:10.281662   79559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:10.281728   79559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:10.297464   79559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:10.297488   79559 start.go:495] detecting cgroup driver to use...
	I0829 19:35:10.297553   79559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:10.316686   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:10.332836   79559 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:10.332880   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:10.347021   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:10.364479   79559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:10.506136   79559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:10.659246   79559 docker.go:233] disabling docker service ...
	I0829 19:35:10.659324   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:10.678953   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:10.694844   79559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:10.837509   79559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:10.976512   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:10.993421   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:11.013434   79559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:11.013492   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.023909   79559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:11.023980   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.038560   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.049911   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.060235   79559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:11.076772   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.093357   79559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.110140   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.121770   79559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:11.131641   79559 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:11.131697   79559 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:11.151460   79559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:11.161320   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:11.286180   79559 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:11.382235   79559 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:11.382312   79559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:11.388226   79559 start.go:563] Will wait 60s for crictl version
	I0829 19:35:11.388299   79559 ssh_runner.go:195] Run: which crictl
	I0829 19:35:11.391832   79559 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:11.429509   79559 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:11.429601   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.457180   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.487106   79559 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:11.488483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:11.491607   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.491988   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:11.492027   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.492316   79559 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:11.496448   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:11.512045   79559 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:11.512159   79559 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:11.512219   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:11.549212   79559 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:11.549287   79559 ssh_runner.go:195] Run: which lz4
	I0829 19:35:11.554151   79559 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:11.558691   79559 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:11.558718   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:35:12.826290   79559 crio.go:462] duration metric: took 1.272173781s to copy over tarball
	I0829 19:35:12.826387   79559 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
	I0829 19:35:09.162551   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:11.165654   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:13.662027   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:15.034221   79559 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.207800368s)
	I0829 19:35:15.034254   79559 crio.go:469] duration metric: took 2.207935139s to extract the tarball
	I0829 19:35:15.034263   79559 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:15.070411   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:15.117649   79559 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:35:15.117675   79559 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:35:15.117684   79559 kubeadm.go:934] updating node { 192.168.50.70 8444 v1.31.0 crio true true} ...
	I0829 19:35:15.117793   79559 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-672127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:15.117873   79559 ssh_runner.go:195] Run: crio config
	I0829 19:35:15.161749   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:15.161778   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:15.161795   79559 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:15.161815   79559 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-672127 NodeName:default-k8s-diff-port-672127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:35:15.161949   79559 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-672127"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:15.162002   79559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:35:15.171789   79559 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:15.171858   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:15.181011   79559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0829 19:35:15.197394   79559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:15.213309   79559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:35:15.231088   79559 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:15.234732   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:15.245700   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:15.368430   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:15.385792   79559 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127 for IP: 192.168.50.70
	I0829 19:35:15.385820   79559 certs.go:194] generating shared ca certs ...
	I0829 19:35:15.385844   79559 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:15.386020   79559 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:15.386108   79559 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:15.386123   79559 certs.go:256] generating profile certs ...
	I0829 19:35:15.386240   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/client.key
	I0829 19:35:15.386324   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key.828c23de
	I0829 19:35:15.386378   79559 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key
	I0829 19:35:15.386523   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:15.386567   79559 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:15.386582   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:15.386615   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:15.386650   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:15.386680   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:15.386736   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:15.387663   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:15.429474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:15.470861   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:15.514906   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:15.552474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:35:15.581749   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:15.605874   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:15.629703   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:35:15.653589   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:15.680222   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:15.706824   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:15.733354   79559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:15.753069   79559 ssh_runner.go:195] Run: openssl version
	I0829 19:35:15.759905   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:15.770507   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776103   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776159   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.783674   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:15.797519   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:15.809517   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814243   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814311   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.819834   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:15.830130   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:15.840473   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.844974   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.845033   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.850619   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:15.860955   79559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:15.865359   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:15.871149   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:15.876982   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:15.882635   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:15.888020   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:15.893423   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:35:15.898989   79559 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:15.899085   79559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:15.899156   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:15.939743   79559 cri.go:89] found id: ""
	I0829 19:35:15.939817   79559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:15.949877   79559 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:15.949896   79559 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:15.949938   79559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:15.959436   79559 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:15.960417   79559 kubeconfig.go:125] found "default-k8s-diff-port-672127" server: "https://192.168.50.70:8444"
	I0829 19:35:15.962469   79559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:15.971672   79559 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0829 19:35:15.971700   79559 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:15.971710   79559 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:15.971777   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:16.015084   79559 cri.go:89] found id: ""
	I0829 19:35:16.015173   79559 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:16.031614   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:16.044359   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:16.044384   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:16.044448   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:35:16.056073   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:16.056139   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:16.066426   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:35:16.075300   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:16.075368   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:16.084795   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.093739   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:16.093804   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.103539   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:35:16.112676   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:16.112744   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:16.121997   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:16.134461   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:16.246853   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.577230   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.330337638s)
	I0829 19:35:17.577271   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.810593   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.892546   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.993500   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:17.993595   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:18.494169   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
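	The two pgrep runs above are part of a simple poll: after the kubeadm "control-plane" and "etcd" phases, minikube repeatedly greps for a kube-apiserver process until one appears. A minimal sketch of that loop, assuming a local exec call in place of minikube's ssh_runner (the 2-minute deadline is an illustrative assumption):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until a kube-apiserver process started from the minikube kubeadm
	// config shows up, mirroring the "waiting for apiserver process to
	// appear" step above. Running pgrep locally (instead of over SSH) is a
	// simplification for this sketch.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest process, -f match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}
```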
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
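	The libmachine lines above show the kvm2 driver waiting for the restarted domain to pick up a DHCP lease, retrying with an increasing delay each time the IP lookup fails. A rough sketch of that retry-with-backoff pattern; lookupIP and the backoff constants are assumptions for illustration, not the driver's real implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var tries int

// lookupIP stands in for querying the libvirt DHCP leases; it is an
// assumption for this sketch. It "finds" the lease after a few attempts so
// both branches are exercised.
func lookupIP() (string, error) {
	tries++
	if tries < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.112", nil
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 12; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, matching the increasing
		// "will retry after ..." durations in the log above.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: waiting for machine to come up, retrying after %v\n", attempt, wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}
```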
	I0829 19:35:15.662198   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:16.162890   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:16.162919   79073 pod_ready.go:82] duration metric: took 11.506772145s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:16.162931   79073 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.170586   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
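	The pod_ready.go lines above report whether each pod's Ready condition is True. A rough client-go equivalent of that check, as a sketch: the kubeconfig path is an assumption, and the fixed 2-second poll interval only approximates what the log shows.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// same condition the "Ready":"True"/"False" lines above are printing.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-xs5gp", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```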
	I0829 19:35:18.994574   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.493764   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.509384   79559 api_server.go:72] duration metric: took 1.515882118s to wait for apiserver process to appear ...
	I0829 19:35:19.509415   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:35:19.509440   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.555577   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.555625   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:21.555642   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.572445   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.572481   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:22.009612   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.017592   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.017627   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:22.510148   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.516104   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.516140   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:23.009648   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:23.016342   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:35:23.022852   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:23.022878   79559 api_server.go:131] duration metric: took 3.513455745s to wait for apiserver health ...
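	The 403 and then 500 responses above are what the probe sees while the restarted apiserver is still completing its bootstrap post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes report "failed: reason withheld"); minikube simply keeps polling /healthz until it returns 200 "ok". A minimal sketch of that probe, with TLS verification disabled purely as an assumption for this anonymous check:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Anonymous probe: early replies may be 403, then 500 while bootstrap
	// post-start hooks finish, and finally 200 "ok", as in the log above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.50.70:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```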
	I0829 19:35:23.022889   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:23.022897   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:23.024557   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:23.025764   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:23.035743   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
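	The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log, so the file below is only a representative bridge-plus-portmap conflist of the kind the bridge CNI step installs; every field, including the pod subnet, is an illustrative assumption.

```go
package main

import "os"

// Representative bridge CNI conflist; the real 496-byte file is not shown in
// the log above, so all values here are assumptions for illustration.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```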
	I0829 19:35:23.075272   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:23.091948   79559 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:23.091991   79559 system_pods.go:61] "coredns-6f6b679f8f-p92hj" [736e7c46-b945-445f-a404-20a609f766e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:23.092004   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [cf016602-46cd-4972-bdd3-1ef5d881b6e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:23.092014   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [eb51ac87-f5e4-4031-84fe-811da2ff8d63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:23.092026   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [caf7b777-935f-4351-b58d-60bb8175bec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:23.092034   79559 system_pods.go:61] "kube-proxy-tlc89" [9a11e5a6-b624-494b-8e94-d362b94fb98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:35:23.092043   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fe83e2af-b046-4d56-9b5c-d7a17db7e854] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:23.092053   79559 system_pods.go:61] "metrics-server-6867b74b74-tbkxg" [6d8f8c92-4f89-4a2a-8690-51a850768516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:23.092065   79559 system_pods.go:61] "storage-provisioner" [7349bb79-c402-4587-ab0b-e52e5d455c61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:35:23.092078   79559 system_pods.go:74] duration metric: took 16.779413ms to wait for pod list to return data ...
	I0829 19:35:23.092091   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:23.099492   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:23.099533   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:23.099547   79559 node_conditions.go:105] duration metric: took 7.450351ms to run NodePressure ...
	I0829 19:35:23.099571   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:23.371279   79559 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377322   79559 kubeadm.go:739] kubelet initialised
	I0829 19:35:23.377346   79559 kubeadm.go:740] duration metric: took 6.045074ms waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377353   79559 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:23.384232   79559 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.391931   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391960   79559 pod_ready.go:82] duration metric: took 7.702072ms for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.391971   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391980   79559 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.396708   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396728   79559 pod_ready.go:82] duration metric: took 4.739691ms for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.396736   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396744   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.401274   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401298   79559 pod_ready.go:82] duration metric: took 4.546455ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.401308   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401314   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:20.670356   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:22.670699   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.786910   78865 start.go:364] duration metric: took 49.670356886s to acquireMachinesLock for "no-preload-690795"
	I0829 19:35:27.786963   78865 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:27.786975   78865 fix.go:54] fixHost starting: 
	I0829 19:35:27.787377   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:27.787425   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:27.803558   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0829 19:35:27.803903   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:27.804328   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:35:27.804348   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:27.804623   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:27.804824   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:27.804967   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:35:27.806332   78865 fix.go:112] recreateIfNeeded on no-preload-690795: state=Stopped err=<nil>
	I0829 19:35:27.806353   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	W0829 19:35:27.806525   78865 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:27.808678   78865 out.go:177] * Restarting existing kvm2 VM for "no-preload-690795" ...
	I0829 19:35:25.407622   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.910410   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
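	The provision step above generates a server certificate signed by the minikube CA with the listed SANs. A compact crypto/x509 sketch of that shape; the throwaway in-memory CA is an assumption (the real step reuses ca.pem/ca-key.pem from disk), and key sizes and lifetimes are illustrative.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch; the real flow loads the existing CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-467349"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.112")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-467349"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("server cert: %d DER bytes\n", len(srvDER))
}
```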
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
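	The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and compares it with the host-side timestamp; a ~60ms delta is accepted. A small sketch of that comparison using the values captured in the log; the 2-second tolerance is an assumption, since the log only tells us that this delta is "within tolerance".

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured above.
	guestOut := "1724960127.745017542"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp from the same log line.
	remote := time.Date(2024, 8, 29, 19, 35, 27, 684258077, time.UTC)
	delta := guest.Sub(remote)
	fmt.Printf("guest clock delta: %v\n", delta)

	// Tolerance value is an assumption for this sketch.
	if math.Abs(delta.Seconds()) < 2 {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("clock skew too large, would resync the guest clock")
	}
}
```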
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
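The runs above show minikube switching the guest over to CRI-O by stopping, disabling and masking the cri-docker and docker units before checking whether docker is still active. Below is a minimal Go sketch of that systemctl sequence, written for this report as an illustration only (it is not minikube's code); the unit names are taken from the log lines above.

    // A minimal sketch of the systemctl sequence above: stop, disable and mask
    // the Docker/cri-docker units so CRI-O can own the CRI socket.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%v: %v (%s)", args, err, out)
    	}
    	return nil
    }

    func main() {
    	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
    		_ = run("systemctl", "stop", "-f", unit) // failures for units that are not loaded are tolerated
    	}
    	_ = run("systemctl", "disable", "cri-docker.socket")
    	_ = run("systemctl", "mask", "cri-docker.service")
    	_ = run("systemctl", "disable", "docker.socket")
    	_ = run("systemctl", "mask", "docker.service")
    }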
	I0829 19:35:25.170397   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.669465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
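Before restarting CRI-O, the log shows two in-place edits to /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.2 and the cgroup manager is switched to cgroupfs, with conmon_cgroup reset to "pod". A rough Go equivalent of those sed commands is sketched below; it is an illustration under those assumptions, not the code minikube runs.

    // Illustrative Go equivalent of the sed edits above.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// pin the pause image, as in the log
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
    	// switch the cgroup manager and re-add conmon_cgroup = "pod" right after it
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		log.Fatal(err)
    	}
    	// a systemctl daemon-reload and a restart of crio follow in the log
    }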
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:27.810111   78865 main.go:141] libmachine: (no-preload-690795) Calling .Start
	I0829 19:35:27.810300   78865 main.go:141] libmachine: (no-preload-690795) Ensuring networks are active...
	I0829 19:35:27.811063   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network default is active
	I0829 19:35:27.811464   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network mk-no-preload-690795 is active
	I0829 19:35:27.811848   78865 main.go:141] libmachine: (no-preload-690795) Getting domain xml...
	I0829 19:35:27.812590   78865 main.go:141] libmachine: (no-preload-690795) Creating domain...
	I0829 19:35:29.131821   78865 main.go:141] libmachine: (no-preload-690795) Waiting to get IP...
	I0829 19:35:29.132876   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.133519   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.133595   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.133481   80876 retry.go:31] will retry after 252.123266ms: waiting for machine to come up
	I0829 19:35:29.387046   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.387534   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.387561   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.387496   80876 retry.go:31] will retry after 304.157394ms: waiting for machine to come up
	I0829 19:35:29.693891   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.694581   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.694603   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.694560   80876 retry.go:31] will retry after 366.980614ms: waiting for machine to come up
	I0829 19:35:30.063032   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.063466   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.063504   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.063431   80876 retry.go:31] will retry after 562.46082ms: waiting for machine to come up
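The retry.go lines above poll libvirt for the new machine's DHCP lease with a growing, slightly randomized delay. A minimal sketch of that wait loop follows; lookupIP is a hypothetical stand-in for the real libvirt query, and the deadline value is illustrative.

    // Minimal sketch of the "waiting for machine to come up" retry loop.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a placeholder for querying libvirt for the domain's DHCP lease.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	base := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		delay := base + time.Duration(rand.Int63n(int64(base))) // jittered, like the fractional delays above
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		base += base / 4 // back off a little each round
    	}
    	fmt.Println("timed out waiting for an IP")
    }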
	I0829 19:35:30.412868   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.908366   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.408823   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.408848   79559 pod_ready.go:82] duration metric: took 10.007525744s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.408862   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418176   79559 pod_ready.go:93] pod "kube-proxy-tlc89" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.418202   79559 pod_ready.go:82] duration metric: took 9.33136ms for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418214   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424362   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.424388   79559 pod_ready.go:82] duration metric: took 6.165646ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424401   79559 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
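The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway (192.168.72.1): it drops any stale entry and appends a fresh one. A small Go sketch of the same replace-and-append logic, written for this report only:

    // Sketch of the grep -v / echo / cp pipeline above, applied to /etc/hosts.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const name = "host.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // drop the stale entry, like grep -v
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.72.1\t"+name) // gateway IP taken from the log above
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }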
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
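Because no preloaded images were found on the guest, the log shows the cached tarball being copied over and unpacked into /var with lz4 while preserving extended attributes. The sketch below mirrors that extraction step; it assumes the tarball has already been transferred and is not minikube's implementation.

    // Sketch of the preload extraction step seen in the log.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		log.Fatalf("tarball not present yet (it is scp'd over first): %v", err)
    	}
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	_ = os.Remove(tarball) // the log removes the tarball once it has been unpacked
    }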
	I0829 19:35:29.670803   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.171154   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:30.627531   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.628123   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.628147   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.628030   80876 retry.go:31] will retry after 488.97189ms: waiting for machine to come up
	I0829 19:35:31.118901   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.119457   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.119480   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.119398   80876 retry.go:31] will retry after 801.189699ms: waiting for machine to come up
	I0829 19:35:31.921939   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.922447   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.922482   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.922391   80876 retry.go:31] will retry after 828.788864ms: waiting for machine to come up
	I0829 19:35:32.752986   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:32.753429   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:32.753465   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:32.753385   80876 retry.go:31] will retry after 1.404436811s: waiting for machine to come up
	I0829 19:35:34.159129   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:34.159714   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:34.159741   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:34.159678   80876 retry.go:31] will retry after 1.312099391s: waiting for machine to come up
	I0829 19:35:35.473045   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:35.473510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:35.473549   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:35.473461   80876 retry.go:31] will retry after 1.46129368s: waiting for machine to come up
	I0829 19:35:35.431524   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:37.437993   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
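The three blocks above install CA certificates the OpenSSL way: each .pem copied under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941, 51391683 and 3ec20f2e in this run). A hedged Go sketch of how such a link can be produced for one certificate; the path is an example taken from the log, and this is not minikube's code.

    // Compute the OpenSSL subject hash for one CA and create the <hash>.0 link.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }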
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
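The -checkend 86400 runs above ask OpenSSL whether each control-plane certificate remains valid for at least another 24 hours; a non-zero exit is what flags a certificate for regeneration. A small illustrative loop over the same certificate paths (a sketch, not minikube's implementation):

    // Check a few control-plane certs for imminent expiry, as the log does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		// exits non-zero if the cert is unreadable or expires within 86400 seconds
    		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
    			fmt.Println(c, "needs regeneration:", err)
    			continue
    		}
    		fmt.Println(c, "still valid for at least 24h")
    	}
    }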
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
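The grep/rm pairs above implement the stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is deleted so the kubeadm init phases that follow can regenerate it. Sketched in Go for illustration only; file list and endpoint are taken from the log lines above.

    // Remove kubeconfig files that do not point at the expected control-plane endpoint.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		data, err := os.ReadFile(path)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			_ = os.Remove(path) // equivalent of the sudo rm -f above
    		}
    	}
    }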
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:34.172082   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.669124   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.669696   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.936034   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:36.936510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:36.936539   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:36.936463   80876 retry.go:31] will retry after 1.943807762s: waiting for machine to come up
	I0829 19:35:38.881644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:38.882110   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:38.882133   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:38.882067   80876 retry.go:31] will retry after 3.173912619s: waiting for machine to come up
	I0829 19:35:39.932725   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.429439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.168816   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.669594   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.059140   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:42.059668   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:42.059692   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:42.059602   80876 retry.go:31] will retry after 4.193427915s: waiting for machine to come up
	I0829 19:35:44.430473   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.431149   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.671012   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.168836   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.256270   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.256783   78865 main.go:141] libmachine: (no-preload-690795) Found IP for machine: 192.168.39.76
	I0829 19:35:46.256806   78865 main.go:141] libmachine: (no-preload-690795) Reserving static IP address...
	I0829 19:35:46.256822   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has current primary IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.257249   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.257274   78865 main.go:141] libmachine: (no-preload-690795) Reserved static IP address: 192.168.39.76
	I0829 19:35:46.257289   78865 main.go:141] libmachine: (no-preload-690795) DBG | skip adding static IP to network mk-no-preload-690795 - found existing host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"}
	I0829 19:35:46.257299   78865 main.go:141] libmachine: (no-preload-690795) Waiting for SSH to be available...
	I0829 19:35:46.257313   78865 main.go:141] libmachine: (no-preload-690795) DBG | Getting to WaitForSSH function...
	I0829 19:35:46.259334   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259664   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.259692   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259788   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH client type: external
	I0829 19:35:46.259821   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa (-rw-------)
	I0829 19:35:46.259859   78865 main.go:141] libmachine: (no-preload-690795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:46.259871   78865 main.go:141] libmachine: (no-preload-690795) DBG | About to run SSH command:
	I0829 19:35:46.259902   78865 main.go:141] libmachine: (no-preload-690795) DBG | exit 0
	I0829 19:35:46.389869   78865 main.go:141] libmachine: (no-preload-690795) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:46.390295   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetConfigRaw
	I0829 19:35:46.390987   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.393890   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394310   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.394342   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394673   78865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:35:46.394846   78865 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:46.394869   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:46.395082   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.397203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397508   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.397535   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397676   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.397862   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398011   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398178   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.398314   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.398475   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.398486   78865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:46.502132   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:46.502163   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502426   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:35:46.502449   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.505084   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505414   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.505443   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505665   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.505861   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506035   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506219   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.506379   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.506573   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.506597   78865 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-690795 && echo "no-preload-690795" | sudo tee /etc/hostname
	I0829 19:35:46.627246   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-690795
	
	I0829 19:35:46.627269   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.630081   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630430   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.630454   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630611   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.630780   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.630947   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.631233   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.631397   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.631545   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.631568   78865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-690795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-690795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-690795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:46.746055   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:46.746106   78865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:46.746131   78865 buildroot.go:174] setting up certificates
	I0829 19:35:46.746143   78865 provision.go:84] configureAuth start
	I0829 19:35:46.746160   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.746411   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.749125   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749476   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.749497   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.751828   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752178   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.752203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752317   78865 provision.go:143] copyHostCerts
	I0829 19:35:46.752384   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:46.752404   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:46.752475   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:46.752580   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:46.752591   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:46.752619   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:46.752693   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:46.752703   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:46.752728   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:46.752791   78865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.no-preload-690795 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-690795]
	I0829 19:35:46.901689   78865 provision.go:177] copyRemoteCerts
	I0829 19:35:46.901744   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:46.901764   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.904873   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905241   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.905287   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905458   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.905657   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.905805   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.905960   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:46.988181   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:47.011149   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:35:47.034849   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:47.057375   78865 provision.go:87] duration metric: took 311.217634ms to configureAuth
	I0829 19:35:47.057402   78865 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:47.057599   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:47.057695   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.060274   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060594   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.060620   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060750   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.060976   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061149   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061311   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.061465   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.061676   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.061703   78865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:47.284836   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:47.284862   78865 machine.go:96] duration metric: took 890.004565ms to provisionDockerMachine
	I0829 19:35:47.284876   78865 start.go:293] postStartSetup for "no-preload-690795" (driver="kvm2")
	I0829 19:35:47.284889   78865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:47.284909   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.285207   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:47.285232   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.287875   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288162   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.288180   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288391   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.288597   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.288772   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.288899   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.372833   78865 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:47.376649   78865 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:47.376670   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:47.376729   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:47.376801   78865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:47.376881   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:47.385721   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:47.407601   78865 start.go:296] duration metric: took 122.711153ms for postStartSetup
	I0829 19:35:47.407640   78865 fix.go:56] duration metric: took 19.620666095s for fixHost
	I0829 19:35:47.407673   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.410483   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.410873   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.410903   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.411139   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.411363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411527   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411674   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.411830   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.411987   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.412001   78865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:47.518841   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960147.499237123
	
	I0829 19:35:47.518864   78865 fix.go:216] guest clock: 1724960147.499237123
	I0829 19:35:47.518872   78865 fix.go:229] Guest: 2024-08-29 19:35:47.499237123 +0000 UTC Remote: 2024-08-29 19:35:47.407643858 +0000 UTC m=+351.882891548 (delta=91.593265ms)
	I0829 19:35:47.518891   78865 fix.go:200] guest clock delta is within tolerance: 91.593265ms
	I0829 19:35:47.518896   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 19.731957743s
	I0829 19:35:47.518914   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.519214   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:47.521738   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522125   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.522153   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522310   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.522806   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523016   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523082   78865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:47.523127   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.523209   78865 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:47.523225   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.526076   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526443   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.526462   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526489   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526681   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.526826   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527005   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527036   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.527073   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.527199   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.527197   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.527370   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527537   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527690   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.635450   78865 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:47.641274   78865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:47.788805   78865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:47.794545   78865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:47.794601   78865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:47.810156   78865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:47.810175   78865 start.go:495] detecting cgroup driver to use...
	I0829 19:35:47.810228   78865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:47.825795   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:47.839011   78865 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:47.839061   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:47.851854   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:47.864467   78865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:47.999155   78865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:48.143858   78865 docker.go:233] disabling docker service ...
	I0829 19:35:48.143921   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:48.157740   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:48.172067   78865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:48.339557   78865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:48.462950   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:48.475646   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:48.492262   78865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:48.492329   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.501580   78865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:48.501647   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.511241   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.520477   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.530413   78865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:48.540457   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.551258   78865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.567365   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.577266   78865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:48.586423   78865 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:48.586479   78865 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:48.599527   78865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:48.608666   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:48.721808   78865 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:48.811417   78865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:48.811495   78865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:48.816689   78865 start.go:563] Will wait 60s for crictl version
	I0829 19:35:48.816750   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:48.820563   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:48.862786   78865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:48.862869   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.889834   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.918515   78865 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:48.919643   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:48.922182   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922530   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:48.922560   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922725   78865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:48.926877   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:48.939254   78865 kubeadm.go:883] updating cluster {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:48.939379   78865 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:48.939413   78865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:48.972281   78865 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:48.972304   78865 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:48.972345   78865 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.972361   78865 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.972384   78865 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.972425   78865 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.972443   78865 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:48.972452   78865 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 19:35:48.972496   78865 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.972558   78865 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973929   78865 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.973979   78865 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 19:35:48.973933   78865 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.973931   78865 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.973932   78865 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973939   78865 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.229315   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 19:35:49.232334   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.271261   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.328903   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.339435   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.349057   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.356840   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.387705   78865 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 19:35:49.387748   78865 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 19:35:49.387760   78865 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.387777   78865 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.387808   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.387829   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.389731   78865 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 19:35:49.389769   78865 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.389809   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.438231   78865 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 19:35:49.438264   78865 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.438304   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.453177   78865 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 19:35:49.453220   78865 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.453270   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.455713   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.455767   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.455802   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.455804   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.455772   78865 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 19:35:49.455895   78865 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.455921   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.458141   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.539090   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.539125   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.568605   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.573622   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.678619   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.680581   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.680584   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.680671   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.699638   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.706556   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.803909   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.809759   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 19:35:49.809863   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.810356   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 19:35:49.810423   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:49.811234   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 19:35:49.811285   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:49.832040   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 19:35:49.832102   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 19:35:49.832153   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:49.832162   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:49.862517   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 19:35:49.862537   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862578   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862653   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 19:35:49.862696   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 19:35:49.862703   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 19:35:49.862731   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 19:35:49.862760   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 19:35:49.862788   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:35:50.192890   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.930928   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:50.931805   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.430716   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.168967   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:52.169327   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:51.820978   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.958376621s)
	I0829 19:35:51.821014   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 19:35:51.821035   78865 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821077   78865 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.958265625s)
	I0829 19:35:51.821109   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821108   78865 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.62819044s)
	I0829 19:35:51.821211   78865 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 19:35:51.821243   78865 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:51.821275   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:51.821111   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 19:35:55.931182   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.431477   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.669752   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:56.670764   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:55.594240   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.773093303s)
	I0829 19:35:55.594275   78865 ssh_runner.go:235] Completed: which crictl: (3.77298113s)
	I0829 19:35:55.594290   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 19:35:55.594340   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:55.594348   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:55.594403   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:57.972145   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377784997s)
	I0829 19:35:57.972180   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.377757134s)
	I0829 19:35:57.972210   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 19:35:57.972223   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:57.972237   78865 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:57.972270   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:58.025853   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:59.843856   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.871560481s)
	I0829 19:35:59.843883   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818003416s)
	I0829 19:35:59.843887   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 19:35:59.843915   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 19:35:59.843925   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.844004   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:35:59.844019   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.849625   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 19:36:00.432638   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.078312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.170365   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.668465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.670347   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.294196   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.450154791s)
	I0829 19:36:01.294230   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 19:36:01.294273   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:01.294336   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:03.144937   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.850574318s)
	I0829 19:36:03.144978   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 19:36:03.145018   78865 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.145081   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.803763   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 19:36:03.803802   78865 cache_images.go:123] Successfully loaded all cached images
	I0829 19:36:03.803807   78865 cache_images.go:92] duration metric: took 14.831492974s to LoadCachedImages
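
For illustration, a minimal Go sketch of the image-loading step recorded above (loading a cached tarball on the node with "sudo podman load"); this is not minikube's actual cache_images.go, and the path used in main is just one of the tarballs named in the log:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage runs "sudo podman load -i <tarPath>" on the node and returns
// any combined output on failure, mirroring the commands in the log above.
func loadCachedImage(tarPath string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", tarPath)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s failed: %v\n%s", tarPath, err, out)
	}
	return nil
}

func main() {
	// Example tarball path taken from the log lines above.
	if err := loadCachedImage("/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
		fmt.Println(err)
	}
}
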
	I0829 19:36:03.803818   78865 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.31.0 crio true true} ...
	I0829 19:36:03.803927   78865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:36:03.803988   78865 ssh_runner.go:195] Run: crio config
	I0829 19:36:03.854859   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:03.854879   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:03.854894   78865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:36:03.854915   78865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-690795 NodeName:no-preload-690795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:36:03.855055   78865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-690795"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:36:03.855114   78865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:36:03.865163   78865 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:36:03.865236   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:36:03.874348   78865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:36:03.891540   78865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:36:03.908488   78865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 19:36:03.926440   78865 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0829 19:36:03.930270   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:36:03.942353   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:36:04.066646   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:36:04.083872   78865 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795 for IP: 192.168.39.76
	I0829 19:36:04.083901   78865 certs.go:194] generating shared ca certs ...
	I0829 19:36:04.083921   78865 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:36:04.084106   78865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:36:04.084172   78865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:36:04.084186   78865 certs.go:256] generating profile certs ...
	I0829 19:36:04.084307   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/client.key
	I0829 19:36:04.084432   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key.8a2db174
	I0829 19:36:04.084492   78865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key
	I0829 19:36:04.084656   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:36:04.084705   78865 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:36:04.084718   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:36:04.084753   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:36:04.084790   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:36:04.084827   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:36:04.084883   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:36:04.085744   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:36:04.124689   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:36:04.158769   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:36:04.188748   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:36:04.217577   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:36:04.251166   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:36:04.282961   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:36:04.306431   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:36:04.329260   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:36:04.365050   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:36:04.393054   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:36:04.417384   78865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:36:04.434555   78865 ssh_runner.go:195] Run: openssl version
	I0829 19:36:04.440074   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:36:04.451378   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455603   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455655   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.461114   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:36:04.472522   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:36:04.483064   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487316   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487383   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.492860   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:36:04.504284   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:36:04.515522   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519853   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519908   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.525240   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:36:04.536612   78865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:36:04.540905   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:36:04.546622   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:36:04.552303   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:36:04.558306   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:36:04.564129   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:36:04.569635   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:36:04.575196   78865 kubeadm.go:392] StartCluster: {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:36:04.575279   78865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:36:04.575360   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.619563   78865 cri.go:89] found id: ""
	I0829 19:36:04.619638   78865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:36:04.629655   78865 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:36:04.629675   78865 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:36:04.629785   78865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:36:04.638771   78865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:36:04.639763   78865 kubeconfig.go:125] found "no-preload-690795" server: "https://192.168.39.76:8443"
	I0829 19:36:04.641783   78865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:36:04.650605   78865 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0829 19:36:04.650634   78865 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:36:04.650644   78865 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:36:04.650693   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.685589   78865 cri.go:89] found id: ""
	I0829 19:36:04.685656   78865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:36:04.702584   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:36:04.711693   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:36:04.711712   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:36:04.711753   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:36:04.720291   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:36:04.720349   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:36:04.729301   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:36:04.739449   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:36:04.739513   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:36:04.748786   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.757128   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:36:04.757175   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.767533   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:36:04.777322   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:36:04.777373   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
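
The grep/rm pairs above check each kubeconfig for the expected control-plane endpoint and remove any file that lacks it so "kubeadm init phase kubeconfig" can regenerate them. A minimal Go sketch of that cleanup pattern (illustrative only, not minikube's kubeadm.go):

package main

import "os/exec"

// cleanStaleKubeconfigs removes kubeconfig files that do not reference the
// expected control-plane endpoint, matching the grep/rm sequence in the log.
func cleanStaleKubeconfigs() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing;
		// either way the stale file is removed and later regenerated.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() { cleanStaleKubeconfigs() }
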
	I0829 19:36:04.786269   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:36:04.795387   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:04.904530   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.430803   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:07.431525   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.169466   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.669719   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:05.750216   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.949551   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.043930   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.140396   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:36:06.140505   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.641069   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.141458   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.161360   78865 api_server.go:72] duration metric: took 1.020963124s to wait for apiserver process to appear ...
	I0829 19:36:07.161390   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:36:07.161426   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.327675   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:36:10.327707   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:36:10.327721   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.396704   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.396737   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:10.661699   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.666518   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.666544   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.162227   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.167736   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.167774   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.662428   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.668688   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.668727   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:12.162372   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:12.168297   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:36:12.175933   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:36:12.175956   78865 api_server.go:131] duration metric: took 5.014557664s to wait for apiserver health ...
	I0829 19:36:12.175967   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:12.175975   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:12.177903   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
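
The 403, then 500, then 200 responses above are the usual progression while a restarted apiserver finishes its post-start hooks. A minimal Go sketch of such a /healthz polling loop follows; it is illustrative only (TLS verification is disabled here for brevity, whereas the real check trusts the cluster CA), and it is not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the /healthz endpoint until it returns 200 "ok" or the
// timeout expires, roughly matching the wait loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// InsecureSkipVerify is only for this self-contained sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.76:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
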
	I0829 19:36:09.930962   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:11.932180   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.669915   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.169150   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:12.179056   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:36:12.202639   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:36:12.221804   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:36:12.242859   78865 system_pods.go:59] 8 kube-system pods found
	I0829 19:36:12.242897   78865 system_pods.go:61] "coredns-6f6b679f8f-j8zzh" [01eaffa5-a976-441c-987c-bdf3b7f72cd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:36:12.242905   78865 system_pods.go:61] "etcd-no-preload-690795" [df54ae59-44ff-4f7b-b6c0-6145bdae3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:36:12.242912   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [aee247f2-1381-4571-a671-2cf140c78196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:36:12.242919   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [69244a85-2778-46c8-a95c-d0f8a264c0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:36:12.242923   78865 system_pods.go:61] "kube-proxy-q4mbt" [985478f9-235d-4922-a7fd-a0cbdddf3f68] Running
	I0829 19:36:12.242934   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [e1e141ab-eb79-4c87-bccd-274f1e7495b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:36:12.242940   78865 system_pods.go:61] "metrics-server-6867b74b74-svnwn" [e096a3dc-1166-4ee3-9f3f-e044064a5a13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:36:12.242945   78865 system_pods.go:61] "storage-provisioner" [6fc868fa-2221-45ad-903e-cd3d2297a3e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:36:12.242952   78865 system_pods.go:74] duration metric: took 21.125083ms to wait for pod list to return data ...
	I0829 19:36:12.242962   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:36:12.253567   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:36:12.253598   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:36:12.253612   78865 node_conditions.go:105] duration metric: took 10.645029ms to run NodePressure ...
	I0829 19:36:12.253634   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:12.514683   78865 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520060   78865 kubeadm.go:739] kubelet initialised
	I0829 19:36:12.520082   78865 kubeadm.go:740] duration metric: took 5.371928ms waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520088   78865 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:36:12.524795   78865 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:14.533484   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:14.430676   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:16.930723   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.668846   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.669308   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.031326   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.530568   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.430550   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.431080   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.431736   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.168590   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.168898   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.531983   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.032162   78865 pod_ready.go:93] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:22.032187   78865 pod_ready.go:82] duration metric: took 9.507358099s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:22.032200   78865 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038935   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.038956   78865 pod_ready.go:82] duration metric: took 1.006750868s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038966   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043258   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.043278   78865 pod_ready.go:82] duration metric: took 4.305789ms for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043298   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049140   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.049159   78865 pod_ready.go:82] duration metric: took 5.852855ms for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049170   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055033   78865 pod_ready.go:93] pod "kube-proxy-q4mbt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.055054   78865 pod_ready.go:82] duration metric: took 5.87681ms for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055067   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229706   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.229734   78865 pod_ready.go:82] duration metric: took 174.6598ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229748   78865 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:25.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
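	(Editor's note: the pod_ready.go lines above show the pattern of waiting up to 4m0s for each control-plane pod's Ready condition to become True. Below is a minimal illustrative sketch of that kind of readiness poll using client-go; it is not minikube's actual pod_ready.go, and the kubeconfig path, 2s interval, and pod name are assumptions taken from or modeled on the log above.)

	// readiness_poll_sketch.go — illustrative only, not minikube source.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig at the default location points at the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s for up to 4m, mirroring the "waiting up to 4m0s" messages above.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-6867b74b74-svnwn", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return podReady(pod), nil
		})
		fmt.Println("ready:", err == nil)
	}

	(In the log above this loop never succeeds for the metrics-server pods, which is why the "Ready":"False" messages repeat until the test times out.)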
	I0829 19:36:25.930818   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.430312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
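	(Editor's note: the ssh_runner.go lines above repeat "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms, waiting for an apiserver process to appear inside the guest. The sketch below shows that polling shape only; minikube runs the command over SSH in the VM, whereas this runs it locally via os/exec, and the 2-minute deadline is an assumption.)

	// apiserver_pgrep_sketch.go — illustrative only, not minikube's ssh_runner.go.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// pgrep exits 0 when a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver process")
	}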
	I0829 19:36:24.169024   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:26.169599   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.668840   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:27.736899   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.235632   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.430611   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.930362   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.669561   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.671106   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.236488   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.736240   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.931264   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.430665   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
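	(Editor's note: the cycle that just ended above is the diagnostic fallback: with no apiserver process found, cri.go lists CRI containers for each control-plane component with crictl, finds none, and logs.go then gathers kubelet, dmesg, and CRI-O output, while "kubectl describe nodes" fails with the connection-refused error because nothing is listening on localhost:8443. The sketch below reproduces that sequence of shell commands locally with os/exec; it is an illustration of the commands visible in the log, not minikube's cri.go/logs.go, and it assumes crictl and journalctl are available on the host.)

	// diag_cycle_sketch.go — illustrative only, not minikube source.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		found := false
		for _, name := range components {
			// Same listing command as in the log: all containers, IDs only, filtered by name.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers\n", name, len(ids))
			if len(ids) > 0 {
				found = true
			}
		}
		if !found {
			// No control-plane containers at all: collect the same host-level
			// log sources shown above (kubelet, dmesg, CRI-O) for the report.
			for _, args := range [][]string{
				{"journalctl", "-u", "kubelet", "-n", "400"},
				{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
				{"journalctl", "-u", "crio", "-n", "400"},
			} {
				out, _ := exec.Command("sudo", args...).CombinedOutput()
				fmt.Printf("--- sudo %s ---\n%s\n", strings.Join(args, " "), out)
			}
		}
	}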
	I0829 19:36:35.172336   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.671072   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:36.736855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:38.736902   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:39.929907   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.930998   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:40.169548   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:42.672996   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.236777   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.736150   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.931489   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:45.933439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.431152   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:45.169686   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:47.169784   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:46.236187   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.736605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.431543   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.931361   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:49.669984   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.168756   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.736642   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:53.236282   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.430710   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:57.430791   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:54.169625   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.169688   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.170249   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.235887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:00.236133   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.931992   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.430785   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:00.669482   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.669691   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.237155   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.237334   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.431033   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.431419   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.431901   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.168832   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:07.669695   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.736029   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.736415   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:10.930895   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.430209   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:10.168947   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:12.669439   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:11.236100   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.236138   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.930954   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.931355   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.669895   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.170115   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.736304   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:19.736492   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.429878   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.430233   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:19.668651   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:21.669949   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.670402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.235749   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.236078   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.431492   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.930440   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:26.168605   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.170938   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.735518   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.737503   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.431222   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.931202   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.668490   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:32.669630   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.235788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.735661   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.931996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.932964   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.429817   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:35.168843   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.169415   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.736103   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.736554   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.235995   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.430698   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.430836   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:39.670059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.169724   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.237785   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.736417   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.929743   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.930603   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.669068   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:47.169440   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.737461   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.236079   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.431026   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.931134   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:49.671574   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:52.168702   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.237371   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:53.736122   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.430278   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.431024   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.169824   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.668831   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.236564   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.736176   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.930996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.931806   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.430980   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:59.169059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:01.668656   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.669716   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.736901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.236263   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.431293   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:07.930953   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.168530   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.169657   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.736429   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.236731   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.931677   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.431484   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:10.668769   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.669771   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:10.736651   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.737433   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.236521   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:14.930832   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:16.931496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:14.669989   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.169402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.736005   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:20.236887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:19.430240   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.930946   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:19.170077   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.670142   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.672076   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:22.237387   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.737069   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.934621   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:26.430632   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:26.168927   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.169453   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:27.235855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:29.236537   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.929913   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.930403   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.931280   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:30.169738   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.669256   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:31.736303   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:34.235284   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:35.430450   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.931562   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:35.169023   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.668411   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:36.236332   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:38.236971   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:40.237468   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.932147   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.430752   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.170246   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.669492   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.737831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.236831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:44.930652   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:46.930941   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:44.669939   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.168670   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.735386   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.736163   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.431741   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.931271   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.169856   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.669655   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:52.236654   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:54.735292   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:53.931350   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.430287   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.432418   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:54.170237   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.668188   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.668309   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.736467   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:59.236483   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:00.930424   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.931161   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:00.668839   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.670762   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.236788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:03.237406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.430398   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.431044   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:05.169920   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.170223   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.735375   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.737017   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.235794   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:09.930945   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:11.931285   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:09.669065   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.169167   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.236756   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.237536   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.431203   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.930983   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:14.169307   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.163640   79073 pod_ready.go:82] duration metric: took 4m0.000694226s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:16.163683   79073 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:16.163706   79073 pod_ready.go:39] duration metric: took 4m12.036045825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:16.163738   79073 kubeadm.go:597] duration metric: took 4m20.35086556s to restartPrimaryControlPlane
	W0829 19:39:16.163795   79073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:16.163827   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:16.736978   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.236047   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.431674   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:21.930447   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:21.236189   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.236347   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.237213   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.932634   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:26.431145   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:28.431496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:27.237871   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:29.735887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:30.931849   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:33.424593   79559 pod_ready.go:82] duration metric: took 4m0.000177098s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:33.424633   79559 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:33.424656   79559 pod_ready.go:39] duration metric: took 4m10.047294609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:33.424687   79559 kubeadm.go:597] duration metric: took 4m17.474785939s to restartPrimaryControlPlane
	W0829 19:39:33.424745   79559 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:33.424773   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.737021   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.236447   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
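
Note: the cycle above, which recurs several times in this stretch of the log, is minikube's control-plane restart probe for the v1.20.0 node. It looks for each control-plane container by name, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status diagnostics; the describe-nodes step fails because nothing is serving on localhost:8443. The individual probes can be rerun by hand on the node, for example:

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
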
	I0829 19:39:36.736406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:39.235941   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:42.401382   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.237529155s)
	I0829 19:39:42.401460   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:42.428754   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:42.441896   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:42.456122   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:42.456147   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:42.456190   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:42.471887   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:42.471947   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:42.483709   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:42.493000   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:42.493070   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:42.511916   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.520829   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:42.520891   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.530567   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:42.540199   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:42.540252   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
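
The four grep/rm pairs above are minikube's stale-kubeconfig sweep: each /etc/kubernetes/*.conf file is checked for the expected control-plane endpoint and removed when the check fails. Condensed into a sketch, using the endpoint shown in the log for this profile:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done
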
	I0829 19:39:42.550058   79073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:42.596809   79073 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:39:42.596966   79073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:42.706623   79073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:42.706766   79073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:42.706931   79073 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:42.717740   79073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:42.719760   79073 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:42.719862   79073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:42.719929   79073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:42.720023   79073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:42.720079   79073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:42.720144   79073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:42.720193   79073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:42.720248   79073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:42.720315   79073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:42.720386   79073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:42.720459   79073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:42.720496   79073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:42.720555   79073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:42.827328   79073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:43.276222   79073 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:39:43.445594   79073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:43.554811   79073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:43.788184   79073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:43.788791   79073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:43.791871   79073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:43.794448   79073 out.go:235]   - Booting up control plane ...
	I0829 19:39:43.794600   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:43.794702   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:43.794800   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:43.813894   79073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:43.822272   79073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:43.822357   79073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:41.236034   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:43.236446   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:45.237605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:39:43.950283   79073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:39:43.950458   79073 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:39:44.452956   79073 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.82915ms
	I0829 19:39:44.453068   79073 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:39:49.455000   79073 kubeadm.go:310] [api-check] The API server is healthy after 5.001789194s
	I0829 19:39:49.473145   79073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:39:49.496760   79073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:39:49.530950   79073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:39:49.531148   79073 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-920571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:39:49.548546   79073 kubeadm.go:310] [bootstrap-token] Using token: bc4428.p8e3szrujohqnvnh
	I0829 19:39:47.735610   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.735833   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.549992   79073 out.go:235]   - Configuring RBAC rules ...
	I0829 19:39:49.550151   79073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:39:49.558070   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:39:49.573758   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:39:49.579988   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:39:49.585250   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:39:49.592477   79073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:39:49.863168   79073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:39:50.294056   79073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:39:50.862652   79073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:39:50.863644   79073 kubeadm.go:310] 
	I0829 19:39:50.863717   79073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:39:50.863729   79073 kubeadm.go:310] 
	I0829 19:39:50.863861   79073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:39:50.863881   79073 kubeadm.go:310] 
	I0829 19:39:50.863917   79073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:39:50.864019   79073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:39:50.864101   79073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:39:50.864111   79073 kubeadm.go:310] 
	I0829 19:39:50.864215   79073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:39:50.864225   79073 kubeadm.go:310] 
	I0829 19:39:50.864298   79073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:39:50.864312   79073 kubeadm.go:310] 
	I0829 19:39:50.864398   79073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:39:50.864517   79073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:39:50.864617   79073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:39:50.864631   79073 kubeadm.go:310] 
	I0829 19:39:50.864743   79073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:39:50.864856   79073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:39:50.864869   79073 kubeadm.go:310] 
	I0829 19:39:50.864983   79073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865110   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:39:50.865142   79073 kubeadm.go:310] 	--control-plane 
	I0829 19:39:50.865152   79073 kubeadm.go:310] 
	I0829 19:39:50.865262   79073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:39:50.865270   79073 kubeadm.go:310] 
	I0829 19:39:50.865370   79073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865527   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:39:50.866485   79073 kubeadm.go:310] W0829 19:39:42.565022    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866852   79073 kubeadm.go:310] W0829 19:39:42.566073    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866979   79073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
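
The two deprecated-API warnings above carry their own remedy: the config written to /var/tmp/minikube/kubeadm.yaml earlier in the log can be rewritten to the current API version with the command kubeadm itself suggests (the output filename here is illustrative):

    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml
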
	I0829 19:39:50.867009   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:39:50.867020   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:39:50.868683   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:39:50.869952   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:39:50.880385   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
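
The 496-byte conflist itself is not reproduced in the log. Purely to illustrate the file format being written here, a minimal bridge conflist looks roughly like the following; every value below is an assumption, not minikube's actual content:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
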
	I0829 19:39:50.900028   79073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:39:50.900152   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:50.900187   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-920571 minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=embed-certs-920571 minikube.k8s.io/primary=true
	I0829 19:39:51.090710   79073 ops.go:34] apiserver oom_adj: -16
	I0829 19:39:51.090865   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:51.591720   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.091579   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.591872   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.091671   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.591191   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.091640   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.591356   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.674005   79073 kubeadm.go:1113] duration metric: took 3.773916232s to wait for elevateKubeSystemPrivileges
	I0829 19:39:54.674046   79073 kubeadm.go:394] duration metric: took 4m58.910639816s to StartCluster
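
The burst of "get sa default" calls above is minikube polling, at roughly 500 ms intervals judging by the timestamps, until the default service account exists in the freshly initialized cluster. Effectively:

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
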
	I0829 19:39:54.674070   79073 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.674178   79073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:39:54.675789   79073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.676038   79073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:39:54.676095   79073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:39:54.676184   79073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-920571"
	I0829 19:39:54.676210   79073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-920571"
	I0829 19:39:54.676222   79073 addons.go:69] Setting metrics-server=true in profile "embed-certs-920571"
	I0829 19:39:54.676225   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:39:54.676241   79073 addons.go:234] Setting addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:54.676264   79073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920571"
	I0829 19:39:54.676216   79073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-920571"
	W0829 19:39:54.676329   79073 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:39:54.676360   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	W0829 19:39:54.676392   79073 addons.go:243] addon metrics-server should already be in state true
	I0829 19:39:54.676455   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.676650   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676664   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676682   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676684   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676824   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676859   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.677794   79073 out.go:177] * Verifying Kubernetes components...
	I0829 19:39:54.679112   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:39:54.694669   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:39:54.694717   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0829 19:39:54.695090   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695420   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695532   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0829 19:39:54.695640   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695656   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695925   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695948   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695951   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.696038   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696266   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696373   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.696392   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.696443   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.696600   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.696629   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.696745   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.697378   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.697413   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.702955   79073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-920571"
	W0829 19:39:54.702978   79073 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:39:54.703003   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.703347   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.703377   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.714194   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0829 19:39:54.714526   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0829 19:39:54.714735   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.714916   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.715368   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715369   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715389   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715401   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715712   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715713   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715944   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.715943   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.717556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.717758   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.718972   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0829 19:39:54.719212   79073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:39:54.719303   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.719212   79073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:39:52.236231   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.238843   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.719723   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.719735   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.720033   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.720307   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:39:54.720322   79073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:39:54.720342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.720533   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.720559   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.720952   79073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:54.720975   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:39:54.720992   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.723754   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724174   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.724198   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724516   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.724684   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.724820   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.724879   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724973   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.725426   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.725466   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.725687   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.725827   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.725982   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.726117   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.743443   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0829 19:39:54.744025   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.744590   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.744618   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.744908   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.745030   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.746560   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.746809   79073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:54.746819   79073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:39:54.746831   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.749422   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749802   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.749827   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749904   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.750058   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.750206   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.750320   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.902922   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:39:54.921933   79073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936483   79073 node_ready.go:49] node "embed-certs-920571" has status "Ready":"True"
	I0829 19:39:54.936513   79073 node_ready.go:38] duration metric: took 14.542582ms for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936524   79073 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:54.945389   79073 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
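
These node_ready/pod_ready waits can be reproduced from the host with kubectl, assuming the kubeconfig context carries the profile name:

    kubectl --context embed-certs-920571 get node embed-certs-920571
    kubectl --context embed-certs-920571 -n kube-system wait --for=condition=Ready \
      pod/etcd-embed-certs-920571 --timeout=6m0s
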
	I0829 19:39:55.076394   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:39:55.076421   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:39:55.089140   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:55.096473   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:55.128207   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:39:55.128235   79073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:39:55.186402   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.186429   79073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:39:55.262731   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.548177   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548521   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548542   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.548555   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548564   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548824   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548857   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Closing plugin on server side
	I0829 19:39:55.548872   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.555956   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.555971   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.556210   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.556227   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020289   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020317   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020610   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020632   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020642   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020650   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020912   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020931   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.369749   79073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.106975723s)
	I0829 19:39:56.369809   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.369825   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370119   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370143   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370154   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.370168   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370407   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370428   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370440   79073 addons.go:475] Verifying addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:56.373030   79073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:39:56.374322   79073 addons.go:510] duration metric: took 1.698226444s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
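
A metrics-server pod that never reports Ready recurs throughout this log (the interleaved 78865 lines), and the addon image selected above is fake.domain/registry.k8s.io/echoserver:1.4, so an image-pull failure is the plausible steady state. A quick way to check a given profile, assuming the stock k8s-app=metrics-server label and a context named after the profile:

    kubectl --context embed-certs-920571 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context embed-certs-920571 -n kube-system describe deployment metrics-server
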
	I0829 19:39:56.460329   79073 pod_ready.go:93] pod "etcd-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:56.460362   79073 pod_ready.go:82] duration metric: took 1.51494335s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:56.460375   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467017   79073 pod_ready.go:93] pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:58.467040   79073 pod_ready.go:82] duration metric: took 2.006657264s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467050   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:59.826535   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.4017346s)
	I0829 19:39:59.826609   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:59.849311   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:59.859855   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:59.874237   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:59.874262   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:59.874315   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:39:59.883719   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:59.883785   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:59.893307   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:39:59.902478   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:59.902519   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:59.912664   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.932387   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:59.932443   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.948358   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:39:59.965812   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:59.965867   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
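	Note: the grep/rm sequence above is the stale-kubeconfig check that runs before `kubeadm init` is retried: each expected file under /etc/kubernetes is searched for the intended control-plane endpoint (here https://control-plane.minikube.internal:8444), and a file that is missing or does not contain it is removed so kubeadm can regenerate it. A minimal Go sketch of that check-then-remove pattern follows; it is illustrative only (the real code drives these commands over SSH from kubeadm.go), with the endpoint and file list taken from the log above.

```go
// Illustrative sketch of the stale-kubeconfig cleanup seen in the log:
// keep a config file only if it already points at the expected control-plane
// endpoint; otherwise remove it so "kubeadm init" rewrites it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint, keep it
		}
		// Missing file or wrong endpoint: remove it (mirrors "sudo rm -f <file>").
		os.Remove(f)
		fmt.Printf("removed stale config: %s\n", f)
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```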
	I0829 19:39:59.975437   79559 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:00.022167   79559 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:00.022347   79559 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:00.126622   79559 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:00.126777   79559 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:00.126914   79559 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:00.135123   79559 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:56.736712   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:59.235639   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:00.137714   79559 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:00.137806   79559 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:00.137875   79559 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:00.138003   79559 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:00.138114   79559 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:00.138184   79559 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:00.138240   79559 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:00.138297   79559 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:00.138351   79559 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:00.138443   79559 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:00.138555   79559 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:00.138607   79559 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:00.138682   79559 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:00.368674   79559 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:00.454426   79559 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:00.576835   79559 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:00.650342   79559 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:01.038392   79559 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:01.038806   79559 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:01.041297   79559 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:01.043020   79559 out.go:235]   - Booting up control plane ...
	I0829 19:40:01.043127   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:01.043224   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:01.043501   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:01.062342   79559 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:01.068185   79559 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:01.068247   79559 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:01.202906   79559 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:01.203076   79559 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:01.705241   79559 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.336154ms
	I0829 19:40:01.705368   79559 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:00.476336   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:02.973188   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.473576   79073 pod_ready.go:93] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.473607   79073 pod_ready.go:82] duration metric: took 5.006550689s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.473616   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478026   79073 pod_ready.go:93] pod "kube-proxy-25cmq" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.478045   79073 pod_ready.go:82] duration metric: took 4.423884ms for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478054   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482541   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.482560   79073 pod_ready.go:82] duration metric: took 4.499742ms for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482566   79073 pod_ready.go:39] duration metric: took 8.54603076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
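	Note: the pod_ready.go waits above poll each system-critical pod (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kube-dns) until its Ready condition reports True, with a 6m0s cap per pod. The following client-go sketch shows the same polling idea; it is a hedged stand-in, not minikube's pod_ready.go, and the 2-second poll interval is an assumption.

```go
// Hedged sketch of the "waiting up to 6m0s for pod ... to be Ready" pattern
// seen in the log; same idea expressed with client-go, not the real helper.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls until the pod's Ready condition is True or the timeout expires.
func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient error: keep polling until the timeout
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```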
	I0829 19:40:03.482581   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:03.482623   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:03.502670   79073 api_server.go:72] duration metric: took 8.826595134s to wait for apiserver process to appear ...
	I0829 19:40:03.502695   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:03.502718   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:40:03.507953   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:40:03.508948   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:03.508968   79073 api_server.go:131] duration metric: took 6.265433ms to wait for apiserver health ...
	I0829 19:40:03.508977   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:03.514929   79073 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:03.514962   79073 system_pods.go:61] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.514971   79073 system_pods.go:61] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.514979   79073 system_pods.go:61] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.514987   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.514994   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.515000   79073 system_pods.go:61] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.515009   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.515018   79073 system_pods.go:61] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.515027   79073 system_pods.go:61] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.515036   79073 system_pods.go:74] duration metric: took 6.052187ms to wait for pod list to return data ...
	I0829 19:40:03.515046   79073 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:03.518040   79073 default_sa.go:45] found service account: "default"
	I0829 19:40:03.518060   79073 default_sa.go:55] duration metric: took 3.004653ms for default service account to be created ...
	I0829 19:40:03.518069   79073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:03.523915   79073 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:03.523942   79073 system_pods.go:89] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.523949   79073 system_pods.go:89] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.523954   79073 system_pods.go:89] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.523958   79073 system_pods.go:89] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.523962   79073 system_pods.go:89] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.523965   79073 system_pods.go:89] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.523968   79073 system_pods.go:89] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.523973   79073 system_pods.go:89] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.523978   79073 system_pods.go:89] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.523986   79073 system_pods.go:126] duration metric: took 5.911567ms to wait for k8s-apps to be running ...
	I0829 19:40:03.523997   79073 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:03.524049   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:03.541502   79073 system_svc.go:56] duration metric: took 17.4955ms WaitForService to wait for kubelet
	I0829 19:40:03.541538   79073 kubeadm.go:582] duration metric: took 8.865466463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:03.541564   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:03.544700   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:03.544728   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:03.544744   79073 node_conditions.go:105] duration metric: took 3.172559ms to run NodePressure ...
	I0829 19:40:03.544758   79073 start.go:241] waiting for startup goroutines ...
	I0829 19:40:03.544771   79073 start.go:246] waiting for cluster config update ...
	I0829 19:40:03.544789   79073 start.go:255] writing updated cluster config ...
	I0829 19:40:03.545136   79073 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:03.609413   79073 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:03.611490   79073 out.go:177] * Done! kubectl is now configured to use "embed-certs-920571" cluster and "default" namespace by default
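	Note: the final readiness gates before "Done!" above are process- and HTTP-level: a pgrep for the kube-apiserver process, then an HTTPS GET against /healthz expecting a 200 response with body "ok", followed by the kube-system pod list, default service account, kubelet service, and NodePressure checks. A minimal sketch of that healthz probe follows; the endpoint is the one from this log, and skipping TLS verification is a shortcut for the sketch only (the real check uses the cluster CA).

```go
// Minimal sketch of the apiserver healthz check logged above
// ("Checking apiserver healthz at https://192.168.61.243:8443/healthz ... returned 200: ok").
// Illustrative only: InsecureSkipVerify is used here to keep the sketch short.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.243:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
```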
	I0829 19:40:01.236210   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.236420   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:05.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:06.707891   79559 kubeadm.go:310] [api-check] The API server is healthy after 5.002523987s
	I0829 19:40:06.719470   79559 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:06.733886   79559 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:06.759672   79559 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:06.759933   79559 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-672127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:06.771514   79559 kubeadm.go:310] [bootstrap-token] Using token: fzav4x.eeztheucmrep51py
	I0829 19:40:06.772887   79559 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:06.773014   79559 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:06.778644   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:06.792388   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:06.798560   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:06.801930   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:06.805767   79559 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:07.119680   79559 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:07.536660   79559 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:08.115528   79559 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:08.115550   79559 kubeadm.go:310] 
	I0829 19:40:08.115621   79559 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:08.115657   79559 kubeadm.go:310] 
	I0829 19:40:08.115780   79559 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:08.115802   79559 kubeadm.go:310] 
	I0829 19:40:08.115843   79559 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:08.115929   79559 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:08.116002   79559 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:08.116011   79559 kubeadm.go:310] 
	I0829 19:40:08.116087   79559 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:08.116099   79559 kubeadm.go:310] 
	I0829 19:40:08.116154   79559 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:08.116173   79559 kubeadm.go:310] 
	I0829 19:40:08.116247   79559 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:08.116386   79559 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:08.116477   79559 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:08.116487   79559 kubeadm.go:310] 
	I0829 19:40:08.116599   79559 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:08.116705   79559 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:08.116712   79559 kubeadm.go:310] 
	I0829 19:40:08.116779   79559 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.116879   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:08.116931   79559 kubeadm.go:310] 	--control-plane 
	I0829 19:40:08.116947   79559 kubeadm.go:310] 
	I0829 19:40:08.117048   79559 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:08.117058   79559 kubeadm.go:310] 
	I0829 19:40:08.117154   79559 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.117270   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:08.118512   79559 kubeadm.go:310] W0829 19:39:59.991394    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118870   79559 kubeadm.go:310] W0829 19:39:59.992249    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118981   79559 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:08.119009   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:40:08.119019   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:08.120832   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:08.122029   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:08.133326   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
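	Note: the "Configuring bridge CNI" step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact file is not reproduced in the log; the sketch below writes a conflist of roughly the typical bridge-plus-portmap shape, with the subnet and plugin options being assumptions rather than the file the test actually installed.

```go
// Illustrative only: writes a minimal bridge CNI config of roughly the shape
// installed by the "Configuring bridge CNI" step. Not the exact 496-byte file
// from the log; subnet and plugin options are assumed for the sketch.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
      "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```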
	I0829 19:40:08.150808   79559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:08.150867   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:08.150884   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-672127 minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=default-k8s-diff-port-672127 minikube.k8s.io/primary=true
	I0829 19:40:08.170047   79559 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:08.350103   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:07.736119   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:10.236910   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:08.850762   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.350244   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.850222   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.350462   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.850237   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.350179   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.851033   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.351069   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.442963   79559 kubeadm.go:1113] duration metric: took 4.29215456s to wait for elevateKubeSystemPrivileges
	I0829 19:40:12.442998   79559 kubeadm.go:394] duration metric: took 4m56.544013459s to StartCluster
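	Note: the repeated "kubectl get sa default" runs above (every ~500ms after creating the minikube-rbac clusterrolebinding) are simply waiting for kube-controller-manager to create the "default" ServiceAccount before the cluster is considered bootstrapped. A hedged client-go sketch of that wait follows; the helper name is invented for illustration and is not the kubeadm.go implementation that drives kubectl over SSH.

```go
// Hedged sketch of the "kubectl get sa default" polling loop logged above:
// wait until the "default" ServiceAccount exists in the "default" namespace.
package bootstrap

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitDefaultServiceAccount polls until kube-controller-manager has created
// the "default" ServiceAccount, or the timeout expires.
func WaitDefaultServiceAccount(cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil
	})
}
```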
	I0829 19:40:12.443020   79559 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.443110   79559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:40:12.444757   79559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.444998   79559 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:40:12.445061   79559 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:40:12.445138   79559 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445151   79559 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445173   79559 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445181   79559 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:40:12.445179   79559 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-672127"
	I0829 19:40:12.445210   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445210   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:40:12.445266   79559 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445313   79559 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445323   79559 addons.go:243] addon metrics-server should already be in state true
	I0829 19:40:12.445347   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445625   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445658   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445662   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445683   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445737   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445775   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.446414   79559 out.go:177] * Verifying Kubernetes components...
	I0829 19:40:12.447652   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:40:12.461386   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0829 19:40:12.461436   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0829 19:40:12.461805   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.461831   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462057   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0829 19:40:12.462324   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462327   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462341   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462347   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462373   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462701   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462798   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462807   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462817   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462886   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.463109   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.463360   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463392   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.463586   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463607   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.465961   79559 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.465971   79559 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:40:12.465991   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.466309   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.466355   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.480989   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0829 19:40:12.481216   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 19:40:12.481407   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481639   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481843   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.481858   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482222   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.482249   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482291   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482440   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.482576   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482745   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.484681   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485336   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485664   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0829 19:40:12.486377   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.486547   79559 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:40:12.486922   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.486945   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.487310   79559 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:40:12.487586   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.488042   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:40:12.488060   79559 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:40:12.488081   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.488266   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.488307   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.488874   79559 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.488897   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:40:12.488914   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.492291   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492699   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492814   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.492844   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493059   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493128   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.493144   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493259   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493300   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493432   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.493471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493822   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.493972   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.494114   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.505220   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0829 19:40:12.505690   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.506337   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.506363   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.506727   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.506899   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.508602   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.508796   79559 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.508810   79559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:40:12.508829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.511310   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511660   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.511691   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.511969   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.512110   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.512253   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.642279   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:40:12.666598   79559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682873   79559 node_ready.go:49] node "default-k8s-diff-port-672127" has status "Ready":"True"
	I0829 19:40:12.682895   79559 node_ready.go:38] duration metric: took 16.267143ms for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682904   79559 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:12.693451   79559 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:12.736525   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:40:12.736548   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:40:12.754764   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:40:12.754786   79559 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:40:12.806826   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:12.806856   79559 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:40:12.817164   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.837896   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.903140   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:14.124266   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.307063383s)
	I0829 19:40:14.124305   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286373382s)
	I0829 19:40:14.124324   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124343   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124430   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221258684s)
	I0829 19:40:14.124473   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124487   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124635   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124649   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124659   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124667   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124794   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124813   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124831   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124848   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124856   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124873   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124864   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124882   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124896   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124902   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124913   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124935   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.125356   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.125359   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.125381   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126568   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.126637   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.126656   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126704   79559 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-672127"
	I0829 19:40:14.193216   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.193238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.193544   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.193562   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.195467   79559 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0829 19:40:12.237641   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.736679   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.196698   79559 addons.go:510] duration metric: took 1.751639165s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0829 19:40:14.720042   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.199482   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.235908   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.735901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.199705   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.699776   79559 pod_ready.go:93] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.699801   79559 pod_ready.go:82] duration metric: took 7.006327617s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.699810   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704240   79559 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.704261   79559 pod_ready.go:82] duration metric: took 4.444744ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704269   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710740   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.710761   79559 pod_ready.go:82] duration metric: took 2.006486043s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710770   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715111   79559 pod_ready.go:93] pod "kube-proxy-nqbn4" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.715134   79559 pod_ready.go:82] duration metric: took 4.357535ms for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715146   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719192   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.719207   79559 pod_ready.go:82] duration metric: took 4.054087ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719222   79559 pod_ready.go:39] duration metric: took 9.036299009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:21.719234   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:21.719289   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:21.734507   79559 api_server.go:72] duration metric: took 9.289477227s to wait for apiserver process to appear ...
	I0829 19:40:21.734531   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:21.734555   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:40:21.739963   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:40:21.740847   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:21.740865   79559 api_server.go:131] duration metric: took 6.327694ms to wait for apiserver health ...
	I0829 19:40:21.740872   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:21.747609   79559 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:21.747636   79559 system_pods.go:61] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.747643   79559 system_pods.go:61] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:21.747648   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.747654   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.747659   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.747662   79559 system_pods.go:61] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.747665   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.747670   79559 system_pods.go:61] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.747674   79559 system_pods.go:61] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.747680   79559 system_pods.go:74] duration metric: took 6.803459ms to wait for pod list to return data ...
	I0829 19:40:21.747689   79559 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:21.750153   79559 default_sa.go:45] found service account: "default"
	I0829 19:40:21.750168   79559 default_sa.go:55] duration metric: took 2.474593ms for default service account to be created ...
	I0829 19:40:21.750175   79559 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:21.901186   79559 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:21.901213   79559 system_pods.go:89] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.901219   79559 system_pods.go:89] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running
	I0829 19:40:21.901222   79559 system_pods.go:89] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.901227   79559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.901231   79559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.901235   79559 system_pods.go:89] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.901238   79559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.901245   79559 system_pods.go:89] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.901249   79559 system_pods.go:89] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.901257   79559 system_pods.go:126] duration metric: took 151.07798ms to wait for k8s-apps to be running ...
	I0829 19:40:21.901263   79559 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:21.901306   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:21.916730   79559 system_svc.go:56] duration metric: took 15.457902ms WaitForService to wait for kubelet
	I0829 19:40:21.916757   79559 kubeadm.go:582] duration metric: took 9.471732105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:21.916773   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:22.099083   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:22.099119   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:22.099133   79559 node_conditions.go:105] duration metric: took 182.354927ms to run NodePressure ...
	I0829 19:40:22.099147   79559 start.go:241] waiting for startup goroutines ...
	I0829 19:40:22.099156   79559 start.go:246] waiting for cluster config update ...
	I0829 19:40:22.099168   79559 start.go:255] writing updated cluster config ...
	I0829 19:40:22.099536   79559 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:22.148307   79559 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:22.150361   79559 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-672127" cluster and "default" namespace by default
	I0829 19:40:21.736121   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:23.229905   78865 pod_ready.go:82] duration metric: took 4m0.000141946s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	E0829 19:40:23.229943   78865 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:40:23.229991   78865 pod_ready.go:39] duration metric: took 4m10.70989222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:23.230021   78865 kubeadm.go:597] duration metric: took 4m18.600330645s to restartPrimaryControlPlane
	W0829 19:40:23.230078   78865 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:40:23.230136   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
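The repeated [kubelet-check] failures from process 79869 above are kubeadm polling the kubelet's local health endpoint on port 10248 and getting connection refused. As a rough illustration (not part of the captured log), the same probe can be run by hand from an SSH session on the node to see whether the kubelet ever starts listening:

# Probe the kubelet health endpoint that kubeadm polls; "connection refused"
# means the kubelet process is not listening on port 10248 at all.
curl -sSL http://localhost:10248/healthz; echo

# If the probe keeps failing, inspect the kubelet service and its recent logs.
sudo systemctl status kubelet --no-pager
sudo journalctl -xeu kubelet --no-pager | tail -n 50

These are the same checks that the kubeadm error text later in this log suggests ('systemctl status kubelet', 'journalctl -xeu kubelet').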
	I0829 19:40:49.374221   78865 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.144057875s)
	I0829 19:40:49.374297   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:49.389586   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:40:49.399146   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:40:49.408450   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:40:49.408469   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:40:49.408521   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:40:49.417651   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:40:49.417706   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:40:49.427073   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:40:49.435307   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:40:49.435356   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:40:49.443720   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.452437   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:40:49.452493   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.461133   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:40:49.469515   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:40:49.469564   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
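The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the following kubeadm init can rewrite it. A compressed sketch of that check (illustrative only, not minikube's actual code):

# Drop any managed kubeconfig that does not target the expected endpoint;
# kubeadm init regenerates whatever is missing.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done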
	I0829 19:40:49.478224   78865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:49.523193   78865 kubeadm.go:310] W0829 19:40:49.504457    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.523801   78865 kubeadm.go:310] W0829 19:40:49.505165    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.640221   78865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:57.429227   78865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:57.429293   78865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:57.429396   78865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:57.429536   78865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:57.429665   78865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:57.429757   78865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:40:57.431358   78865 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:57.431434   78865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:57.431485   78865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:57.431566   78865 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:57.431640   78865 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:57.431711   78865 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:57.431786   78865 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:57.431847   78865 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:57.431893   78865 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:57.431956   78865 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:57.432013   78865 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:57.432052   78865 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:57.432109   78865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:57.432186   78865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:57.432275   78865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:57.432352   78865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:57.432444   78865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:57.432518   78865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:57.432595   78865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:57.432648   78865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:57.434057   78865 out.go:235]   - Booting up control plane ...
	I0829 19:40:57.434161   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:57.434245   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:57.434298   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:57.434396   78865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:57.434475   78865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:57.434509   78865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:57.434687   78865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:57.434772   78865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:57.434824   78865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.075612ms
	I0829 19:40:57.434887   78865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:57.434932   78865 kubeadm.go:310] [api-check] The API server is healthy after 5.002117161s
	I0829 19:40:57.435094   78865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:57.435232   78865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:57.435284   78865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:57.435429   78865 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-690795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:57.435472   78865 kubeadm.go:310] [bootstrap-token] Using token: adxyev.rcmf9k5ok190h0g1
	I0829 19:40:57.436846   78865 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:57.436936   78865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:57.437001   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:57.437113   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:57.437214   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:57.437307   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:57.437380   78865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:57.437480   78865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:57.437528   78865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:57.437577   78865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:57.437583   78865 kubeadm.go:310] 
	I0829 19:40:57.437635   78865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:57.437641   78865 kubeadm.go:310] 
	I0829 19:40:57.437704   78865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:57.437710   78865 kubeadm.go:310] 
	I0829 19:40:57.437744   78865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:57.437807   78865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:57.437851   78865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:57.437857   78865 kubeadm.go:310] 
	I0829 19:40:57.437907   78865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:57.437913   78865 kubeadm.go:310] 
	I0829 19:40:57.437951   78865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:57.437957   78865 kubeadm.go:310] 
	I0829 19:40:57.438000   78865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:57.438107   78865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:57.438188   78865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:57.438200   78865 kubeadm.go:310] 
	I0829 19:40:57.438289   78865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:57.438359   78865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:57.438364   78865 kubeadm.go:310] 
	I0829 19:40:57.438429   78865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438507   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:57.438525   78865 kubeadm.go:310] 	--control-plane 
	I0829 19:40:57.438534   78865 kubeadm.go:310] 
	I0829 19:40:57.438611   78865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:57.438621   78865 kubeadm.go:310] 
	I0829 19:40:57.438688   78865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438791   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
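The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's public key. Using the certificateDir reported earlier in this run (/var/lib/minikube/certs), it can be recomputed on the control-plane node roughly like this (illustrative, not taken from the captured output):

# Recompute the discovery-token CA cert hash from the cluster CA certificate.
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | sha256sum | cut -d' ' -f1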
	I0829 19:40:57.438814   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:40:57.438825   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:57.440836   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:57.442065   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:57.452700   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
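Here minikube writes its bridge CNI config (496 bytes) to /etc/cni/net.d/1-k8s.conflist. The file's exact contents are not captured in this log; the snippet below is only a generic example of a bridge-plus-portmap conflist of that kind, with an assumed pod subnet, shown as it might be written by hand:

# Hypothetical bridge CNI conflist; the subnet and plugin options are
# illustrative and not copied from this test run.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF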
	I0829 19:40:57.469549   78865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:57.469621   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:57.469656   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-690795 minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=no-preload-690795 minikube.k8s.io/primary=true
	I0829 19:40:57.503411   78865 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:57.648807   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.149067   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.649770   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.148932   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.649114   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.149833   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.649474   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.149795   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.649154   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.745084   78865 kubeadm.go:1113] duration metric: took 4.275525047s to wait for elevateKubeSystemPrivileges
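The run of 'kubectl get sa default' calls just above is minikube waiting for the default service account to appear before it treats RBAC setup (the minikube-rbac clusterrolebinding created at 19:40:57) as complete; the log labels this the elevateKubeSystemPrivileges wait. A hand-rolled equivalent of that wait, for illustration only:

# Poll until the "default" service account exists, mirroring the
# elevateKubeSystemPrivileges wait reported in the line above.
until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done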
	I0829 19:41:01.745117   78865 kubeadm.go:394] duration metric: took 4m57.169926854s to StartCluster
	I0829 19:41:01.745134   78865 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.745209   78865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:41:01.746775   78865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.747005   78865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:41:01.747062   78865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:41:01.747155   78865 addons.go:69] Setting storage-provisioner=true in profile "no-preload-690795"
	I0829 19:41:01.747175   78865 addons.go:69] Setting default-storageclass=true in profile "no-preload-690795"
	I0829 19:41:01.747189   78865 addons.go:234] Setting addon storage-provisioner=true in "no-preload-690795"
	W0829 19:41:01.747199   78865 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:41:01.747200   78865 addons.go:69] Setting metrics-server=true in profile "no-preload-690795"
	I0829 19:41:01.747240   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747246   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:41:01.747243   78865 addons.go:234] Setting addon metrics-server=true in "no-preload-690795"
	W0829 19:41:01.747307   78865 addons.go:243] addon metrics-server should already be in state true
	I0829 19:41:01.747333   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747206   78865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-690795"
	I0829 19:41:01.747652   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747670   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747678   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747702   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747780   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747810   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.748790   78865 out.go:177] * Verifying Kubernetes components...
	I0829 19:41:01.750069   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:41:01.764006   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0829 19:41:01.765511   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766194   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.766218   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.766287   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0829 19:41:01.766670   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766694   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.766912   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.766965   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0829 19:41:01.767129   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767149   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.767304   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.767506   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.767737   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767755   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.768073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.768202   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768241   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.768615   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768646   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.771065   78865 addons.go:234] Setting addon default-storageclass=true in "no-preload-690795"
	W0829 19:41:01.771088   78865 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:41:01.771117   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.771415   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.771441   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.787271   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0829 19:41:01.788003   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.788577   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.788606   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.788885   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0829 19:41:01.789065   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0829 19:41:01.789073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.789361   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.789716   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.789774   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.790084   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.790243   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.790319   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.791018   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.791029   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.791393   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.791721   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.792306   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793557   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793806   78865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:41:01.794942   78865 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:41:01.795033   78865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:01.795049   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:41:01.795067   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.796032   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:41:01.796048   78865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:41:01.796065   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.799646   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800163   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800618   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800826   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800843   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800941   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801043   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801114   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801184   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801239   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801367   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.801484   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.807187   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0829 19:41:01.807604   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.808056   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.808070   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.808471   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.808671   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.810374   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.810569   78865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:01.810579   78865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:41:01.810591   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.813314   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.813766   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.813776   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.814029   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.814187   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.814292   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.814379   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.963011   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:41:01.981935   78865 node_ready.go:35] waiting up to 6m0s for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998366   78865 node_ready.go:49] node "no-preload-690795" has status "Ready":"True"
	I0829 19:41:01.998389   78865 node_ready.go:38] duration metric: took 16.418591ms for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998398   78865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:02.005811   78865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:02.053495   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:02.197657   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:02.239853   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:41:02.239877   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:41:02.270764   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:41:02.270789   78865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:41:02.327819   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.327853   78865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:41:02.380812   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.380843   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381117   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381191   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.381209   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.381217   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381432   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381444   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.384211   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.387013   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.387027   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.387286   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:02.387333   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.387345   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.027502   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:03.027535   78865 pod_ready.go:82] duration metric: took 1.02170157s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.027550   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.410428   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212715771s)
	I0829 19:41:03.410485   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.410503   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412586   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.412590   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412614   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412625   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.412632   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412926   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412947   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412954   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.587379   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203116606s)
	I0829 19:41:03.587437   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587452   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587770   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.587840   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.587859   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587874   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587878   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.588185   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.588206   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.588218   78865 addons.go:475] Verifying addon metrics-server=true in "no-preload-690795"
	I0829 19:41:03.588192   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.590131   78865 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:03.591280   78865 addons.go:510] duration metric: took 1.844219817s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:41:05.035315   78865 pod_ready.go:103] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"False"
	I0829 19:41:06.033037   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:06.033060   78865 pod_ready.go:82] duration metric: took 3.005501862s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:06.033068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039035   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.039059   78865 pod_ready.go:82] duration metric: took 1.005984859s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043096   78865 pod_ready.go:93] pod "kube-proxy-p7zvh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.043116   78865 pod_ready.go:82] duration metric: took 4.042896ms for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043125   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046934   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.046957   78865 pod_ready.go:82] duration metric: took 3.826283ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046966   78865 pod_ready.go:39] duration metric: took 5.048560252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:07.046983   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:41:07.047036   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:41:07.062234   78865 api_server.go:72] duration metric: took 5.315200823s to wait for apiserver process to appear ...
	I0829 19:41:07.062256   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:41:07.062277   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:41:07.068022   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
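The healthz probe above hits https://192.168.39.76:8443/healthz directly and gets a 200 with body "ok". From the client side, an equivalent check can be made through the freshly written kubeconfig, for example (illustrative; the context name follows the profile name used in this run):

# Query the apiserver health endpoints via kubectl rather than raw HTTPS.
kubectl --context no-preload-690795 get --raw='/healthz'; echo
kubectl --context no-preload-690795 get --raw='/readyz?verbose'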
	I0829 19:41:07.069170   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:41:07.069190   78865 api_server.go:131] duration metric: took 6.927858ms to wait for apiserver health ...
	I0829 19:41:07.069198   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:41:07.075909   78865 system_pods.go:59] 9 kube-system pods found
	I0829 19:41:07.075932   78865 system_pods.go:61] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.075939   78865 system_pods.go:61] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.075944   78865 system_pods.go:61] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.075949   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.075953   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.075956   78865 system_pods.go:61] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.075960   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.075964   78865 system_pods.go:61] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.075968   78865 system_pods.go:61] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.075975   78865 system_pods.go:74] duration metric: took 6.771333ms to wait for pod list to return data ...
	I0829 19:41:07.075985   78865 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:41:07.079235   78865 default_sa.go:45] found service account: "default"
	I0829 19:41:07.079255   78865 default_sa.go:55] duration metric: took 3.264804ms for default service account to be created ...
	I0829 19:41:07.079263   78865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:41:07.083981   78865 system_pods.go:86] 9 kube-system pods found
	I0829 19:41:07.084006   78865 system_pods.go:89] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.084014   78865 system_pods.go:89] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.084019   78865 system_pods.go:89] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.084025   78865 system_pods.go:89] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.084029   78865 system_pods.go:89] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.084032   78865 system_pods.go:89] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.084037   78865 system_pods.go:89] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.084042   78865 system_pods.go:89] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.084045   78865 system_pods.go:89] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.084052   78865 system_pods.go:126] duration metric: took 4.784448ms to wait for k8s-apps to be running ...
	I0829 19:41:07.084062   78865 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:41:07.084104   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:07.098513   78865 system_svc.go:56] duration metric: took 14.440998ms WaitForService to wait for kubelet
	I0829 19:41:07.098551   78865 kubeadm.go:582] duration metric: took 5.351518255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:41:07.098574   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:41:07.231160   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:41:07.231189   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:41:07.231200   78865 node_conditions.go:105] duration metric: took 132.62068ms to run NodePressure ...
	I0829 19:41:07.231209   78865 start.go:241] waiting for startup goroutines ...
	I0829 19:41:07.231216   78865 start.go:246] waiting for cluster config update ...
	I0829 19:41:07.231225   78865 start.go:255] writing updated cluster config ...
	I0829 19:41:07.231503   78865 ssh_runner.go:195] Run: rm -f paused
	I0829 19:41:07.283204   78865 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:41:07.284751   78865 out.go:177] * Done! kubectl is now configured to use "no-preload-690795" cluster and "default" namespace by default
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 
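	[editor's note] The failure mode above is the kubelet never answering its health check on port 10248, so kubeadm's wait-control-plane phase times out after 4m0s. A minimal triage sketch, built only from the commands the log itself suggests (the <profile> placeholder and the output trimming are mine, not from the report):
	
		# On the minikube node: check whether the kubelet is running and why it stopped
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# List any control-plane containers CRI-O managed to start (quoted from the kubeadm hint above)
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# If the kubelet logs point at a cgroup-driver mismatch, retry with the flag the log recommends
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	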
	
	
	==> CRI-O <==
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.246053901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961009246030315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d23e9362-5869-4362-b663-c7104ff3fe1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.246611135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23985e4c-9523-4b6e-aecb-2c12797a0440 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.246660237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23985e4c-9523-4b6e-aecb-2c12797a0440 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.246952840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23985e4c-9523-4b6e-aecb-2c12797a0440 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.282743322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30d41b19-1635-4f52-be23-17334bd36179 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.282834594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30d41b19-1635-4f52-be23-17334bd36179 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.283993631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=687f9f43-759a-4908-ae80-f86f84d7dd17 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.284319586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961009284298252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=687f9f43-759a-4908-ae80-f86f84d7dd17 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.284844078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7175c0a9-a5fe-4e45-a577-2a5dec4f76b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.284907105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7175c0a9-a5fe-4e45-a577-2a5dec4f76b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.285101836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7175c0a9-a5fe-4e45-a577-2a5dec4f76b1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.320348703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb1f9e7f-6d45-4360-affa-83db33e9fba6 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.320432904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb1f9e7f-6d45-4360-affa-83db33e9fba6 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.321994150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49384d7b-f103-442d-804c-a611baa04423 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.322342562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961009322321015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49384d7b-f103-442d-804c-a611baa04423 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.323068913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1691c4e0-fd4b-457b-95be-5a61a0b61904 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.323134331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1691c4e0-fd4b-457b-95be-5a61a0b61904 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.323330127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1691c4e0-fd4b-457b-95be-5a61a0b61904 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.355127181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc56c30a-f1a7-44af-823a-8543a939c139 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.355214736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc56c30a-f1a7-44af-823a-8543a939c139 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.356096395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd3fdecf-bdc0-402b-a1a9-92b1f6e4f338 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.356416257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961009356394092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd3fdecf-bdc0-402b-a1a9-92b1f6e4f338 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.357071165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52b3bedc-fe33-4965-9296-0ec4705cdebf name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.357121610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52b3bedc-fe33-4965-9296-0ec4705cdebf name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:50:09 no-preload-690795 crio[700]: time="2024-08-29 19:50:09.357312674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52b3bedc-fe33-4965-9296-0ec4705cdebf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a7153d12c98b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   869495f955c23       storage-provisioner
	89c065e8e725e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d73820d8e9343       coredns-6f6b679f8f-xbfb6
	2757f3d6106ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f81e54f62ae9d       coredns-6f6b679f8f-wr7bq
	379ceac562879       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   ecbad8a0de810       kube-proxy-p7zvh
	1d4eab307b8f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   0ed5b1e684ff8       kube-apiserver-no-preload-690795
	dbf0ae6e4d317       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   25bff46e36d60       etcd-no-preload-690795
	1fc5a190d459d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   52f0b1fe265e2       kube-scheduler-no-preload-690795
	3c721a7921b37       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   a32c159a17432       kube-controller-manager-no-preload-690795
	e3ef809174f4a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   fff2e8c50b000       kube-apiserver-no-preload-690795
	
	
	==> coredns [2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-690795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-690795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=no-preload-690795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:40:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-690795
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:50:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:46:13 +0000   Thu, 29 Aug 2024 19:40:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:46:13 +0000   Thu, 29 Aug 2024 19:40:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:46:13 +0000   Thu, 29 Aug 2024 19:40:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:46:13 +0000   Thu, 29 Aug 2024 19:40:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    no-preload-690795
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08c9c91c767f460fabd230675217c2db
	  System UUID:                08c9c91c-767f-460f-abd2-30675217c2db
	  Boot ID:                    d952d251-7c4e-41f9-b9b6-e5d5f68dd90d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-wr7bq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-6f6b679f8f-xbfb6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-690795                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-no-preload-690795             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-no-preload-690795    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-p7zvh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-690795             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-shs88              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s (x8 over 9m18s)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s (x8 over 9m18s)  kubelet          Node no-preload-690795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s (x7 over 9m18s)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m13s (x2 over 9m13s)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s (x2 over 9m13s)  kubelet          Node no-preload-690795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s (x2 over 9m13s)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-690795 event: Registered Node no-preload-690795 in Controller
	
	
	==> dmesg <==
	[  +0.040737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.049458] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.923752] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.535091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.717454] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.069561] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068501] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.177179] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.153110] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.265381] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[Aug29 19:36] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.061497] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.808195] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +3.641246] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.179925] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.136970] kauditd_printk_skb: 26 callbacks suppressed
	[Aug29 19:40] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.291139] systemd-fstab-generator[3053]: Ignoring "noauto" option for root device
	[  +4.590106] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.472947] systemd-fstab-generator[3373]: Ignoring "noauto" option for root device
	[Aug29 19:41] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +0.091905] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.804889] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99] <==
	{"level":"info","ts":"2024-08-29T19:40:51.946103Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2024-08-29T19:40:51.953072Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T19:40:51.953029Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4f06aa0eaa8889d9","initial-advertise-peer-urls":["https://192.168.39.76:2380"],"listen-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.76:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T19:40:51.953366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 switched to configuration voters=(5694425758823909849)"}
	{"level":"info","ts":"2024-08-29T19:40:51.953472Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","added-peer-id":"4f06aa0eaa8889d9","added-peer-peer-urls":["https://192.168.39.76:2380"]}
	{"level":"info","ts":"2024-08-29T19:40:52.903579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:52.903730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:52.903768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 received MsgPreVoteResp from 4f06aa0eaa8889d9 at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:52.903805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.903849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 received MsgVoteResp from 4f06aa0eaa8889d9 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.903904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.903939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4f06aa0eaa8889d9 elected leader 4f06aa0eaa8889d9 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.905822Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.906937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.907030Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.907078Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.907174Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4f06aa0eaa8889d9","local-member-attributes":"{Name:no-preload-690795 ClientURLs:[https://192.168.39.76:2379]}","request-path":"/0/members/4f06aa0eaa8889d9/attributes","cluster-id":"1be8679029844888","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:40:52.907225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:52.907735Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:52.909294Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:52.910091Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:40:52.910254Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:40:52.910286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:40:52.908668Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:52.911487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.76:2379"}
	
	
	==> kernel <==
	 19:50:09 up 14 min,  0 users,  load average: 0.13, 0.21, 0.17
	Linux no-preload-690795 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067] <==
	E0829 19:45:55.225782       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0829 19:45:55.225837       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:45:55.226948       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:45:55.226969       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:46:55.227859       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:46:55.228129       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:46:55.228328       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:46:55.228374       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:46:55.229543       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:46:55.229702       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:48:55.230806       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:48:55.230948       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:48:55.230801       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:48:55.230995       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:48:55.232272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:48:55.232343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff] <==
	W0829 19:40:46.642754       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.678956       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.723074       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.765069       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.820009       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.859222       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.866968       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.908462       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.916088       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.944918       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.979034       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.026028       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.027348       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.095314       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.096729       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.166798       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.250565       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.272282       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.419007       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.445651       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.522368       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.644472       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.781414       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.788080       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.803535       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930] <==
	E0829 19:45:01.202431       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:45:01.653014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:45:31.209970       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:45:31.665829       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:46:01.217494       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:46:01.675793       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:46:13.128406       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-690795"
	E0829 19:46:31.224002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:46:31.685996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:47:01.230659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:47:01.693501       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:47:05.769739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="298.106µs"
	I0829 19:47:20.768288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="217.391µs"
	E0829 19:47:31.236794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:47:31.709557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:48:01.244470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:48:01.720196       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:48:31.251208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:48:31.728045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:49:01.258615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:49:01.736272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:49:31.266316       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:49:31.747037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:50:01.273485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:50:01.754737       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:41:02.946558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:41:02.957958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.76"]
	E0829 19:41:02.958024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:41:03.060123       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:41:03.060171       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:41:03.060201       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:41:03.063327       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:41:03.063617       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:41:03.063631       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:41:03.066996       1 config.go:197] "Starting service config controller"
	I0829 19:41:03.067024       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:41:03.067055       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:41:03.067062       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:41:03.067744       1 config.go:326] "Starting node config controller"
	I0829 19:41:03.067753       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:41:03.169421       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:41:03.169479       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:41:03.169507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264] <==
	W0829 19:40:54.244741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 19:40:54.244769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:54.244823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 19:40:54.244849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:54.244903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:54.244926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.118518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:55.118573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.170012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:40:55.170129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.273646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:40:55.273846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.273852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:55.274012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.344649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:40:55.344732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.367721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 19:40:55.367765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.411433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:40:55.411481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.433878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:55.433925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.714630       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 19:40:55.714857       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 19:40:57.827881       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:48:56 no-preload-690795 kubelet[3380]: E0829 19:48:56.913908    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960936913519871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:06 no-preload-690795 kubelet[3380]: E0829 19:49:06.915618    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960946915173984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:06 no-preload-690795 kubelet[3380]: E0829 19:49:06.915647    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960946915173984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:08 no-preload-690795 kubelet[3380]: E0829 19:49:08.752782    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:49:16 no-preload-690795 kubelet[3380]: E0829 19:49:16.920037    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960956918543788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:16 no-preload-690795 kubelet[3380]: E0829 19:49:16.921747    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960956918543788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:22 no-preload-690795 kubelet[3380]: E0829 19:49:22.753031    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:49:26 no-preload-690795 kubelet[3380]: E0829 19:49:26.923865    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960966923402150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:26 no-preload-690795 kubelet[3380]: E0829 19:49:26.923905    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960966923402150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:34 no-preload-690795 kubelet[3380]: E0829 19:49:34.752796    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:49:36 no-preload-690795 kubelet[3380]: E0829 19:49:36.926250    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960976925804937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:36 no-preload-690795 kubelet[3380]: E0829 19:49:36.926567    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960976925804937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:46 no-preload-690795 kubelet[3380]: E0829 19:49:46.928419    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960986927761857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:46 no-preload-690795 kubelet[3380]: E0829 19:49:46.928840    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960986927761857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:48 no-preload-690795 kubelet[3380]: E0829 19:49:48.755005    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]: E0829 19:49:56.795257    3380 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]: E0829 19:49:56.931332    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960996930809694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:49:56 no-preload-690795 kubelet[3380]: E0829 19:49:56.931387    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724960996930809694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:50:02 no-preload-690795 kubelet[3380]: E0829 19:50:02.753431    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:50:06 no-preload-690795 kubelet[3380]: E0829 19:50:06.933503    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961006933093956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:50:06 no-preload-690795 kubelet[3380]: E0829 19:50:06.933554    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961006933093956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3] <==
	I0829 19:41:04.013795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 19:41:04.028976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 19:41:04.029139       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 19:41:04.037465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 19:41:04.037608       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-690795_162ddbc5-60b9-43cc-a598-796c35f93279!
	I0829 19:41:04.042143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2bda510d-c5dd-4aa1-946c-691215f2b320", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-690795_162ddbc5-60b9-43cc-a598-796c35f93279 became leader
	I0829 19:41:04.138435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-690795_162ddbc5-60b9-43cc-a598-796c35f93279!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-690795 -n no-preload-690795
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-690795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-shs88
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-690795 describe pod metrics-server-6867b74b74-shs88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-690795 describe pod metrics-server-6867b74b74-shs88: exit status 1 (62.795179ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-shs88" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-690795 describe pod metrics-server-6867b74b74-shs88: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:43:50.041343   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:44:27.014073   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:44:32.802259   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:44:49.632607   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:44:57.000837   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:45:12.474639   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:45:13.105764   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 36 more times)
E0829 19:45:49.951134   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 5 more times)
E0829 19:45:55.865629   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 23 more times)
E0829 19:46:20.064670   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 9 more times)
E0829 19:46:29.779026   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 5 more times)
E0829 19:46:35.539228   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 4 more times)
E0829 19:46:41.163690   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
(previous warning repeated 31 more times)
E0829 19:47:13.013197   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:48:03.950465   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:04.229200   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:48:26.706448   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:48:50.041229   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:49:32.802279   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:49:49.632678   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:49:57.000990   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:50:12.474486   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:50:49.951035   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
	[the warning above was emitted verbatim 50 more times]
E0829 19:51:41.163784   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
	[the warning above was emitted verbatim 59 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
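For anyone reproducing this by hand: the pod list query the test helper keeps retrying above is roughly equivalent to the kubectl call below. This is a hedged sketch, not part of the test itself; the context name matches the minikube profile (the kubeconfig context minikube creates by default), and the namespace and label selector are taken from the failing URL:

	kubectl --context old-k8s-version-467349 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

Against a stopped apiserver this fails with the same "connection refused" seen in the warnings.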
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (222.382956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-467349" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (214.37832ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
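Note on the status checks above: --format takes a Go text/template rendered over minikube's status struct, so, assuming the standard field names listed in minikube's status help (Host, Kubelet, APIServer, Kubeconfig), several fields can be queried in one call, e.g.:

	out/minikube-linux-amd64 status -p old-k8s-version-467349 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'

which here would be expected to report the host as Running while the apiserver is Stopped, matching the two single-field checks logged above.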
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 25: (1.563492943s)
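The "logs -n 25" above limits how many log lines are collected for the post-mortem dump. For deeper debugging, the same command accepts a larger count and (per minikube's documented flags) can write to a file instead of stdout; the file name below is only illustrative:

	out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 100 --file=old-k8s-version-467349-postmortem.log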
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo cat                              | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:31:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
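	The header above documents the glog/klog-style line format used for the rest of this trace. As an illustrative aside (not part of the captured output), a minimal Go sketch that splits such a line into its fields could look like the following; the regular expression and field names are assumptions for demonstration only:

	// Hedged sketch: parse a klog-style line of the form
	// "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s id=%s file=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
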
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:32:00.606343   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:03.678411   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:09.758354   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:12.830416   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:18.910387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:21.982407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:28.062408   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:31.134407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:37.214369   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:40.286345   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:46.366360   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:49.438406   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:55.518437   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:58.590377   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:04.670397   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:07.742436   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:13.822348   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:16.894422   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:22.974353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:26.046337   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:32.126325   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:35.198391   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:41.278353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:44.350421   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:50.434297   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:53.502296   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:59.582448   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:02.654443   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:08.734358   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:11.806435   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:17.886372   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:20.958351   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:27.038356   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:30.110387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
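	The run of "Error dialing TCP ... no route to host" messages above shows the provisioner repeatedly probing the guest's SSH port (192.168.39.76:22) while the VM is unreachable. A minimal, hedged Go sketch of that kind of dial-until-reachable loop, with illustrative timeouts rather than minikube's actual values, might look like this:

	// Hedged sketch: poll a TCP endpoint (e.g. a guest's SSH port) until it
	// becomes reachable or an overall deadline expires. This mirrors the retry
	// pattern visible in the log above; it is not minikube's own code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("dial %s: %v (retrying)\n", addr, err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForTCP("192.168.39.76:22", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
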
	I0829 19:34:33.114600   79073 start.go:364] duration metric: took 4m24.136110592s to acquireMachinesLock for "embed-certs-920571"
	I0829 19:34:33.114658   79073 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:33.114666   79073 fix.go:54] fixHost starting: 
	I0829 19:34:33.115014   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:33.115043   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:33.130652   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0829 19:34:33.131096   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:33.131536   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:34:33.131555   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:33.131871   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:33.132060   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:33.132217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:34:33.133784   79073 fix.go:112] recreateIfNeeded on embed-certs-920571: state=Stopped err=<nil>
	I0829 19:34:33.133809   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	W0829 19:34:33.133951   79073 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:33.135573   79073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-920571" ...
	I0829 19:34:33.136726   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Start
	I0829 19:34:33.136873   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring networks are active...
	I0829 19:34:33.137613   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network default is active
	I0829 19:34:33.137909   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network mk-embed-certs-920571 is active
	I0829 19:34:33.138400   79073 main.go:141] libmachine: (embed-certs-920571) Getting domain xml...
	I0829 19:34:33.139091   79073 main.go:141] libmachine: (embed-certs-920571) Creating domain...
	I0829 19:34:33.112327   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:33.112363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112705   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:34:33.112736   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112943   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:34:33.114457   78865 machine.go:96] duration metric: took 4m37.430735456s to provisionDockerMachine
	I0829 19:34:33.114505   78865 fix.go:56] duration metric: took 4m37.452542806s for fixHost
	I0829 19:34:33.114516   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 4m37.452590646s
	W0829 19:34:33.114545   78865 start.go:714] error starting host: provision: host is not running
	W0829 19:34:33.114637   78865 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 19:34:33.114647   78865 start.go:729] Will try again in 5 seconds ...
	I0829 19:34:34.366249   79073 main.go:141] libmachine: (embed-certs-920571) Waiting to get IP...
	I0829 19:34:34.367233   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.367595   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.367671   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.367580   80412 retry.go:31] will retry after 294.1031ms: waiting for machine to come up
	I0829 19:34:34.663229   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.663677   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.663709   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.663624   80412 retry.go:31] will retry after 345.352879ms: waiting for machine to come up
	I0829 19:34:35.010102   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.010576   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.010604   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.010527   80412 retry.go:31] will retry after 295.49024ms: waiting for machine to come up
	I0829 19:34:35.308077   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.308580   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.308608   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.308525   80412 retry.go:31] will retry after 575.095429ms: waiting for machine to come up
	I0829 19:34:35.885400   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.885806   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.885835   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.885762   80412 retry.go:31] will retry after 524.463725ms: waiting for machine to come up
	I0829 19:34:36.411496   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:36.411840   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:36.411866   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:36.411802   80412 retry.go:31] will retry after 672.277111ms: waiting for machine to come up
	I0829 19:34:37.085978   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:37.086512   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:37.086537   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:37.086473   80412 retry.go:31] will retry after 1.185875442s: waiting for machine to come up
	I0829 19:34:38.274401   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:38.274881   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:38.274914   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:38.274827   80412 retry.go:31] will retry after 1.426721352s: waiting for machine to come up
	I0829 19:34:38.116486   78865 start.go:360] acquireMachinesLock for no-preload-690795: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:34:39.703333   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:39.703732   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:39.703756   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:39.703691   80412 retry.go:31] will retry after 1.500429564s: waiting for machine to come up
	I0829 19:34:41.206311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:41.206854   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:41.206882   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:41.206766   80412 retry.go:31] will retry after 2.021866027s: waiting for machine to come up
	I0829 19:34:43.230915   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:43.231329   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:43.231382   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:43.231318   80412 retry.go:31] will retry after 2.415112477s: waiting for machine to come up
	I0829 19:34:45.649815   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:45.650169   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:45.650221   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:45.650140   80412 retry.go:31] will retry after 3.292956483s: waiting for machine to come up
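	The "will retry after ..." intervals above grow from a few hundred milliseconds to several seconds while the driver waits for the domain to obtain a DHCP lease. A hedged sketch of such a backoff-with-jitter retry loop, using made-up parameters rather than minikube's, could be:

	// Hedged sketch: retry a check with growing, jittered delays, similar to the
	// "will retry after ..." intervals above while waiting for a VM to get an IP.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(check func() (string, error), attempts int) (string, error) {
		delay := 300 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := check(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay *= 2
		}
		return "", errors.New("machine did not come up")
	}

	func main() {
		ip, err := retryWithBackoff(func() (string, error) {
			return "", errors.New("unable to find current IP address") // placeholder check
		}, 5)
		fmt.Println(ip, err)
	}
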
	I0829 19:34:50.094786   79559 start.go:364] duration metric: took 3m31.488453615s to acquireMachinesLock for "default-k8s-diff-port-672127"
	I0829 19:34:50.094847   79559 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:50.094857   79559 fix.go:54] fixHost starting: 
	I0829 19:34:50.095330   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:50.095367   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:50.112044   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0829 19:34:50.112510   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:50.112941   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:34:50.112964   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:50.113325   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:50.113522   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:34:50.113663   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:34:50.115335   79559 fix.go:112] recreateIfNeeded on default-k8s-diff-port-672127: state=Stopped err=<nil>
	I0829 19:34:50.115378   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	W0829 19:34:50.115548   79559 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:50.117176   79559 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-672127" ...
	I0829 19:34:48.944274   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.944748   79073 main.go:141] libmachine: (embed-certs-920571) Found IP for machine: 192.168.61.243
	I0829 19:34:48.944776   79073 main.go:141] libmachine: (embed-certs-920571) Reserving static IP address...
	I0829 19:34:48.944793   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has current primary IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.945167   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.945195   79073 main.go:141] libmachine: (embed-certs-920571) Reserved static IP address: 192.168.61.243
	I0829 19:34:48.945214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | skip adding static IP to network mk-embed-certs-920571 - found existing host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"}
	I0829 19:34:48.945225   79073 main.go:141] libmachine: (embed-certs-920571) Waiting for SSH to be available...
	I0829 19:34:48.945236   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Getting to WaitForSSH function...
	I0829 19:34:48.947646   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948004   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.948034   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948132   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH client type: external
	I0829 19:34:48.948162   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa (-rw-------)
	I0829 19:34:48.948280   79073 main.go:141] libmachine: (embed-certs-920571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:34:48.948307   79073 main.go:141] libmachine: (embed-certs-920571) DBG | About to run SSH command:
	I0829 19:34:48.948328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | exit 0
	I0829 19:34:49.073781   79073 main.go:141] libmachine: (embed-certs-920571) DBG | SSH cmd err, output: <nil>: 
	I0829 19:34:49.074184   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetConfigRaw
	I0829 19:34:49.074813   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.077014   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077349   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.077369   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077550   79073 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:34:49.077724   79073 machine.go:93] provisionDockerMachine start ...
	I0829 19:34:49.077739   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.077936   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.080112   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080448   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.080472   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080548   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.080715   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080853   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080983   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.081110   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.081294   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.081306   79073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:34:49.182232   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:34:49.182282   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182556   79073 buildroot.go:166] provisioning hostname "embed-certs-920571"
	I0829 19:34:49.182582   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182783   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.185368   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185727   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.185751   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185901   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.186077   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186237   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186379   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.186505   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.186721   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.186740   79073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-920571 && echo "embed-certs-920571" | sudo tee /etc/hostname
	I0829 19:34:49.300225   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-920571
	
	I0829 19:34:49.300261   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.303129   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303497   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.303528   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303682   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.303883   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304061   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304193   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.304466   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.304650   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.304667   79073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920571/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:34:49.413678   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
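	The hostname step above runs a small shell script on the guest over SSH. As a hedged illustration only (not minikube's implementation), a remote command of that kind could be executed from Go with golang.org/x/crypto/ssh roughly as follows; the address, user and key path are placeholders:

	// Hedged sketch: run a remote provisioning command over SSH, as in the
	// hostname step above. Host key checking is disabled as in throwaway test
	// environments only.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environments only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("192.168.61.243:22", "docker",
			"/path/to/id_rsa", "sudo hostname embed-certs-920571")
		fmt.Println(out, err)
	}
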
	I0829 19:34:49.413710   79073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:34:49.413765   79073 buildroot.go:174] setting up certificates
	I0829 19:34:49.413774   79073 provision.go:84] configureAuth start
	I0829 19:34:49.413786   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.414069   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.416618   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.416965   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.416993   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.417143   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.419308   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419585   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.419630   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419746   79073 provision.go:143] copyHostCerts
	I0829 19:34:49.419802   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:34:49.419820   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:34:49.419882   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:34:49.419973   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:34:49.419981   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:34:49.420005   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:34:49.420055   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:34:49.420063   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:34:49.420083   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
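	The copyHostCerts lines above follow a simple pattern: an existing destination file is removed, then the certificate is copied back into place. A minimal, hedged Go sketch of that pattern, with placeholder paths and not minikube's own helper, might be:

	// Hedged sketch of the "found existing file, removing ... then cp" pattern.
	package main

	import (
		"fmt"
		"io"
		"os"
	)

	func copyCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil { // found existing destination, remove it
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		fmt.Println(copyCert(".minikube/certs/ca.pem", ".minikube/ca.pem"))
	}
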
	I0829 19:34:49.420129   79073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920571 san=[127.0.0.1 192.168.61.243 embed-certs-920571 localhost minikube]
	I0829 19:34:49.488345   79073 provision.go:177] copyRemoteCerts
	I0829 19:34:49.488396   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:34:49.488418   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.490954   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491290   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.491328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491473   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.491667   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.491794   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.491932   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.571847   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:34:49.594401   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 19:34:49.615988   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:34:49.638030   79073 provision.go:87] duration metric: took 224.241128ms to configureAuth
	I0829 19:34:49.638058   79073 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:34:49.638251   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:49.638342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.640876   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.641244   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641439   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.641662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641941   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.642126   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.642292   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.642307   79073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:34:49.862247   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:34:49.862276   79073 machine.go:96] duration metric: took 784.541058ms to provisionDockerMachine
	I0829 19:34:49.862286   79073 start.go:293] postStartSetup for "embed-certs-920571" (driver="kvm2")
	I0829 19:34:49.862296   79073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:34:49.862325   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.862632   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:34:49.862660   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.865463   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.865871   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.865899   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.866068   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.866285   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.866459   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.866644   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.948826   79073 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:34:49.952779   79073 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:34:49.952800   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:34:49.952858   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:34:49.952935   79073 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:34:49.953034   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:34:49.962083   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:49.986910   79073 start.go:296] duration metric: took 124.612025ms for postStartSetup
	I0829 19:34:49.986944   79073 fix.go:56] duration metric: took 16.872279139s for fixHost
	I0829 19:34:49.986964   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.989581   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.989919   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.989946   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.990080   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.990281   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990519   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.990835   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.991009   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.991020   79073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:34:50.094598   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960090.067799977
	
	I0829 19:34:50.094618   79073 fix.go:216] guest clock: 1724960090.067799977
	I0829 19:34:50.094626   79073 fix.go:229] Guest: 2024-08-29 19:34:50.067799977 +0000 UTC Remote: 2024-08-29 19:34:49.98694779 +0000 UTC m=+281.148944887 (delta=80.852187ms)
	I0829 19:34:50.094667   79073 fix.go:200] guest clock delta is within tolerance: 80.852187ms
	I0829 19:34:50.094672   79073 start.go:83] releasing machines lock for "embed-certs-920571", held for 16.98003549s
	I0829 19:34:50.094697   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.094962   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:50.097867   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098301   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.098331   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098494   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099007   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099190   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099276   79073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:34:50.099322   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.099429   79073 ssh_runner.go:195] Run: cat /version.json
	I0829 19:34:50.099453   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.101909   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.101932   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102283   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102342   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102363   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102460   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102647   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102717   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102899   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102964   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.103032   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.178744   79073 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:50.220024   79073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:34:50.370308   79073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:34:50.379363   79073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:34:50.379435   79073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:34:50.394787   79073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:34:50.394810   79073 start.go:495] detecting cgroup driver to use...
	I0829 19:34:50.394886   79073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:34:50.410061   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:34:50.423846   79073 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:34:50.423910   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:34:50.437117   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:34:50.450318   79073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:34:50.563588   79073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:34:50.706261   79073 docker.go:233] disabling docker service ...
	I0829 19:34:50.706356   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:34:50.721443   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:34:50.734284   79073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:34:50.871611   79073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:34:51.006487   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:34:51.019543   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:34:51.036398   79073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:34:51.036444   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.045884   79073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:34:51.045931   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.055634   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.065379   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.075104   79073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:34:51.085560   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.095777   79073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.114679   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
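The sed chain above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A hedged sketch of the same in-place rewrite from Go is shown below; it collapses the separate conmon_cgroup delete/insert into one substitution and assumes the file already exists, whereas the log performs each edit with sed over SSH.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        conf := string(data)
        // Mirror the substitutions the log performs with sed.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("rewrote", path, "- restart crio to apply")
    }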
	I0829 19:34:51.125695   79073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:34:51.135263   79073 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:34:51.135328   79073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:34:51.148534   79073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
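The sequence above probes net.bridge.bridge-nf-call-iptables, falls back to loading br_netfilter when the sysctl is not yet exposed, and enables IPv4 forwarding before restarting CRI-O. A sketch of that bridge-netfilter preparation driven from Go follows; it shells out with the same commands the log shows, with error handling simplified.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        // If the bridge netfilter sysctl is not available yet, load the module first,
        // exactly as the log falls back to `sudo modprobe br_netfilter`.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl not available yet, loading br_netfilter:", err)
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                panic(err)
            }
        }
        // Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            panic(err)
        }
    }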
	I0829 19:34:51.158658   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:51.281185   79073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:34:51.378558   79073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:34:51.378618   79073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:34:51.383580   79073 start.go:563] Will wait 60s for crictl version
	I0829 19:34:51.383638   79073 ssh_runner.go:195] Run: which crictl
	I0829 19:34:51.387081   79073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:34:51.426413   79073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:34:51.426491   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.453777   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.481306   79073 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:34:50.118500   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Start
	I0829 19:34:50.118776   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring networks are active...
	I0829 19:34:50.119618   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network default is active
	I0829 19:34:50.120105   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network mk-default-k8s-diff-port-672127 is active
	I0829 19:34:50.120501   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Getting domain xml...
	I0829 19:34:50.121238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Creating domain...
	I0829 19:34:51.414344   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting to get IP...
	I0829 19:34:51.415308   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415790   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.415692   80540 retry.go:31] will retry after 256.92247ms: waiting for machine to come up
	I0829 19:34:51.674173   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674728   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674754   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.674670   80540 retry.go:31] will retry after 338.812858ms: waiting for machine to come up
	I0829 19:34:52.015450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.015977   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.016009   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.015920   80540 retry.go:31] will retry after 385.497306ms: waiting for machine to come up
	I0829 19:34:52.403718   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404324   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404361   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.404259   80540 retry.go:31] will retry after 536.615454ms: waiting for machine to come up
	I0829 19:34:52.943166   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943736   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.943678   80540 retry.go:31] will retry after 584.895039ms: waiting for machine to come up
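The retry.go lines above poll libvirt for the domain's DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after 256.92247ms", then ~339ms, ~385ms, and so on). A generic sketch of that wait-with-backoff pattern is shown below; the probe function, delays, and deadline are placeholders, not the libmachine implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries probe until it succeeds or the deadline passes,
    // sleeping a jittered, growing interval between attempts.
    func waitFor(probe func() error, deadline time.Duration) error {
        start := time.Now()
        delay := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            if err := probe(); err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return errors.New("timed out waiting for machine to come up")
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("attempt %d failed, will retry after %v\n", attempt, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // grow the base delay, roughly like the log's increasing intervals
        }
    }

    func main() {
        // Placeholder probe: in the log this is "does the domain have an IP lease yet?".
        calls := 0
        err := waitFor(func() error {
            calls++
            if calls < 4 {
                return errors.New("no IP yet")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("result:", err)
    }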
	I0829 19:34:51.482485   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:51.485272   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485599   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:51.485632   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485803   79073 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:34:51.490493   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
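The one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the current gateway IP. A self-contained sketch of the same idempotent hosts-file rewrite follows; the path, IP, and hostname are taken from the log line, and it needs root (or point it at a scratch file) to run.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops existing lines for hostname and appends "ip\thostname",
    // matching the grep -v / echo / cp dance in the log.
    func upsertHost(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+hostname) {
                continue // drop the stale entry
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }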
	I0829 19:34:51.505212   79073 kubeadm.go:883] updating cluster {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:34:51.505359   79073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:34:51.505413   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:51.539415   79073 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:34:51.539485   79073 ssh_runner.go:195] Run: which lz4
	I0829 19:34:51.543107   79073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:34:51.546831   79073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:34:51.546864   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:34:52.815579   79073 crio.go:462] duration metric: took 1.272496626s to copy over tarball
	I0829 19:34:52.815659   79073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:34:53.530873   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531510   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531540   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:53.531452   80540 retry.go:31] will retry after 790.882954ms: waiting for machine to come up
	I0829 19:34:54.324385   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324785   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324817   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:54.324706   80540 retry.go:31] will retry after 815.842176ms: waiting for machine to come up
	I0829 19:34:55.142878   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143375   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:55.143325   80540 retry.go:31] will retry after 1.177682749s: waiting for machine to come up
	I0829 19:34:56.322780   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323215   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323248   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:56.323160   80540 retry.go:31] will retry after 1.158169512s: waiting for machine to come up
	I0829 19:34:57.483529   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.483990   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.484023   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:57.483917   80540 retry.go:31] will retry after 1.631842784s: waiting for machine to come up
	I0829 19:34:54.931044   79073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.115353131s)
	I0829 19:34:54.931077   79073 crio.go:469] duration metric: took 2.115468165s to extract the tarball
	I0829 19:34:54.931086   79073 ssh_runner.go:146] rm: /preloaded.tar.lz4
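The preload path above stats /preloaded.tar.lz4, copies the 389 MB image tarball over when it is missing, unpacks it into /var with extended attributes preserved, and then removes the tarball. A hedged sketch of the extract-and-clean-up step driven from Go follows; the tar flags and paths are copied from the log, and sudo plus lz4 must be available on the target.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Same invocation as the log: extract with xattrs, decompress with lz4, rooted at /var.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
        fmt.Printf("took %v to extract the tarball\n", time.Since(start))
        // The log removes the tarball afterwards to free disk space.
        _ = exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
    }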
	I0829 19:34:54.967902   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:55.006987   79073 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:34:55.007010   79073 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:34:55.007017   79073 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0829 19:34:55.007123   79073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:34:55.007187   79073 ssh_runner.go:195] Run: crio config
	I0829 19:34:55.051987   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:34:55.052016   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:34:55.052039   79073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:34:55.052077   79073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920571 NodeName:embed-certs-920571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:34:55.052254   79073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-920571"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:34:55.052337   79073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:34:55.061509   79073 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:34:55.061586   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:34:55.070182   79073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 19:34:55.086180   79073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:34:55.103184   79073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 19:34:55.119226   79073 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0829 19:34:55.122845   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:55.133782   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:55.266431   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:34:55.283043   79073 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571 for IP: 192.168.61.243
	I0829 19:34:55.283066   79073 certs.go:194] generating shared ca certs ...
	I0829 19:34:55.283081   79073 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:34:55.283237   79073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:34:55.283287   79073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:34:55.283297   79073 certs.go:256] generating profile certs ...
	I0829 19:34:55.283438   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/client.key
	I0829 19:34:55.283519   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key.dda9dcff
	I0829 19:34:55.283573   79073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key
	I0829 19:34:55.283708   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:34:55.283773   79073 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:34:55.283793   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:34:55.283831   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:34:55.283869   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:34:55.283901   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:34:55.283957   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:55.284835   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:34:55.330384   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:34:55.366718   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:34:55.393815   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:34:55.436855   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 19:34:55.463343   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:34:55.487693   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:34:55.511657   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:34:55.536017   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:34:55.558298   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:34:55.579840   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:34:55.601271   79073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:34:55.616634   79073 ssh_runner.go:195] Run: openssl version
	I0829 19:34:55.621890   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:34:55.633224   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637431   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637486   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.643034   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:34:55.654607   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:34:55.666297   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670433   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670492   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.675787   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:34:55.686953   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:34:55.697241   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701133   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701189   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.706242   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
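Each CA bundle above is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted certificates. A sketch of computing that hash and creating the link is shown below; it shells out to openssl exactly as the log does, and the file path in main is one of the log's own.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashLink runs `openssl x509 -hash -noout -in cert` and symlinks
    // /etc/ssl/certs/<hash>.0 back to the certificate, mirroring the log.
    func hashLink(cert string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // `ln -fs` semantics: replace an existing link
        return os.Symlink(cert, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }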
	I0829 19:34:55.716165   79073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:34:55.720159   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:34:55.727612   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:34:55.734806   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:34:55.742352   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:34:55.749483   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:34:55.756543   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
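The `-checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours before the existing cluster state is reused. The same check can be done in pure Go with crypto/x509 instead of shelling out to openssl; the file path below is illustrative.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires before now+window, the equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }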
	I0829 19:34:55.763413   79073 kubeadm.go:392] StartCluster: {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:34:55.763499   79073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:34:55.763537   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.803136   79073 cri.go:89] found id: ""
	I0829 19:34:55.803219   79073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:34:55.812851   79073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:34:55.812868   79073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:34:55.812907   79073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:34:55.823461   79073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:55.824969   79073 kubeconfig.go:125] found "embed-certs-920571" server: "https://192.168.61.243:8443"
	I0829 19:34:55.828095   79073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:34:55.838579   79073 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.243
	I0829 19:34:55.838616   79073 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:34:55.838626   79073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:34:55.838669   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.876618   79073 cri.go:89] found id: ""
	I0829 19:34:55.876674   79073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:34:55.893401   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:34:55.902557   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:34:55.902579   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:34:55.902631   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:34:55.911349   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:34:55.911407   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:34:55.920377   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:34:55.928764   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:34:55.928824   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:34:55.937630   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.945836   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:34:55.945897   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.954491   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:34:55.962466   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:34:55.962517   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:34:55.971080   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:34:55.979709   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:56.086301   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.378119   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.29178222s)
	I0829 19:34:57.378153   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.574026   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.655499   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
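restartPrimaryControlPlane regenerates the control plane by running the individual kubeadm init phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the copied kubeadm.yaml rather than doing a full `kubeadm init`. A sketch of driving that phase sequence from Go follows; the binary and config paths come from the log, and this is not minikube's internal code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Same phase order as the log lines above.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
            cmd.Stdout = os.Stdout
            cmd.Stderr = os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }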
	I0829 19:34:57.755371   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:34:57.755457   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.255939   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.755813   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.117916   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118404   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118427   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:59.118355   80540 retry.go:31] will retry after 2.806936823s: waiting for machine to come up
	I0829 19:35:01.927079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927473   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:01.927422   80540 retry.go:31] will retry after 3.008556566s: waiting for machine to come up
	I0829 19:34:59.255536   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.756296   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.802484   79073 api_server.go:72] duration metric: took 2.047112988s to wait for apiserver process to appear ...
	I0829 19:34:59.802516   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:34:59.802537   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:34:59.803088   79073 api_server.go:269] stopped: https://192.168.61.243:8443/healthz: Get "https://192.168.61.243:8443/healthz": dial tcp 192.168.61.243:8443: connect: connection refused
	I0829 19:35:00.302707   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.439793   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.439825   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.439837   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.482217   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.482245   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.802617   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.811079   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:02.811116   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.303128   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.307613   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:03.307657   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.803189   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.809164   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:35:03.816623   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:03.816649   79073 api_server.go:131] duration metric: took 4.014126212s to wait for apiserver health ...
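The retry loop above keeps polling the apiserver's /healthz endpoint until the two failing post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) complete and the endpoint returns 200. The same verbose per-check output can be fetched by hand; a minimal sketch, assuming the embed-certs-920571 kubeconfig context that minikube normally creates for this profile:

    kubectl --context embed-certs-920571 get --raw '/healthz?verbose'
    # or, unauthenticated (relies on the default system:public-info-viewer binding):
    curl -k 'https://192.168.61.243:8443/healthz?verbose'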
	I0829 19:35:03.816657   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:35:03.816664   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:03.818484   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:03.819706   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:03.833365   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
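The 496-byte 1-k8s.conflist pushed here is the bridge CNI configuration announced on the previous line. The log does not dump the file itself; a generic bridge conflist of the same shape (hypothetical contents and subnet, for orientation only) would be written roughly like this:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF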
	I0829 19:35:03.851607   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:03.861274   79073 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:03.861313   79073 system_pods.go:61] "coredns-6f6b679f8f-2wrn6" [05e03841-faab-4fd4-88c9-199b39a71ba6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:03.861320   79073 system_pods.go:61] "etcd-embed-certs-920571" [5545a51a-3b76-4b39-b347-6f68b8d7edbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:03.861328   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [cecb3e4e-9d55-4dc9-8d14-884ffbf56475] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:03.861334   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [77e06ace-0262-418f-b41c-700aabf2fa1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:03.861338   79073 system_pods.go:61] "kube-proxy-hflpk" [a57a1785-8ccf-4955-b5b2-19c72032d9f5] Running
	I0829 19:35:03.861353   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [bdb2ed9c-3bf2-4e91-b6a4-ba947dab93ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:03.861359   79073 system_pods.go:61] "metrics-server-6867b74b74-xs5gp" [98380519-4a65-4208-b9cc-f1941a5c2f01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:03.861362   79073 system_pods.go:61] "storage-provisioner" [d18a769f-283f-4db3-aad0-82fc0267980f] Running
	I0829 19:35:03.861368   79073 system_pods.go:74] duration metric: took 9.738329ms to wait for pod list to return data ...
	I0829 19:35:03.861375   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:03.865311   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:03.865341   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:03.865355   79073 node_conditions.go:105] duration metric: took 3.974661ms to run NodePressure ...
	I0829 19:35:03.865373   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:04.939084   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939532   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939567   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:04.939479   80540 retry.go:31] will retry after 3.738266407s: waiting for machine to come up
	I0829 19:35:04.123411   79073 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127613   79073 kubeadm.go:739] kubelet initialised
	I0829 19:35:04.127639   79073 kubeadm.go:740] duration metric: took 4.197494ms waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127649   79073 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:04.132339   79073 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.136884   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136909   79073 pod_ready.go:82] duration metric: took 4.548897ms for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.136917   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136927   79073 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.141014   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141037   79073 pod_ready.go:82] duration metric: took 4.103179ms for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.141048   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141062   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.144778   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144799   79073 pod_ready.go:82] duration metric: took 3.728001ms for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.144807   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144812   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.255204   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255227   79073 pod_ready.go:82] duration metric: took 110.408053ms for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.255247   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255253   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656086   79073 pod_ready.go:93] pod "kube-proxy-hflpk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:04.656124   79073 pod_ready.go:82] duration metric: took 400.860776ms for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656137   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:06.674533   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
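While the node still reports Ready=False, pod_ready records each control-plane pod as "skipping" and then settles into polling kube-scheduler-embed-certs-920571. The same state can be checked interactively; the context name below assumes minikube's usual profile-named kubeconfig entry:

    kubectl --context embed-certs-920571 get nodes
    kubectl --context embed-certs-920571 -n kube-system get pods -o wide
    # block until the scheduler pod is Ready, mirroring the test's 4m wait:
    kubectl --context embed-certs-920571 -n kube-system \
      wait --for=condition=Ready pod/kube-scheduler-embed-certs-920571 --timeout=4m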
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
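fixHost found the old-k8s-version-467349 machine stopped and asks the kvm2 driver to start it again. Since the driver manages an ordinary libvirt domain (the profile configs in this run use KVMQemuURI qemu:///system), the same restart can be watched from the host with virsh, assuming it is installed there:

    virsh -c qemu:///system list --all                        # domain shows "shut off", then "running"
    virsh -c qemu:///system start old-k8s-version-467349
    virsh -c qemu:///system domifaddr old-k8s-version-467349  # prints the IP once a DHCP lease appears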
	I0829 19:35:08.681559   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682042   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Found IP for machine: 192.168.50.70
	I0829 19:35:08.682070   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has current primary IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682080   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserving static IP address...
	I0829 19:35:08.682524   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserved static IP address: 192.168.50.70
	I0829 19:35:08.682564   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.682580   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for SSH to be available...
	I0829 19:35:08.682609   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | skip adding static IP to network mk-default-k8s-diff-port-672127 - found existing host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"}
	I0829 19:35:08.682623   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Getting to WaitForSSH function...
	I0829 19:35:08.684466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684816   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.684876   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684957   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH client type: external
	I0829 19:35:08.684982   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa (-rw-------)
	I0829 19:35:08.685032   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:08.685053   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | About to run SSH command:
	I0829 19:35:08.685069   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | exit 0
	I0829 19:35:08.806174   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | SSH cmd err, output: <nil>: 
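WaitForSSH shells out to the system ssh client with the options logged above and succeeds once "exit 0" runs cleanly on the guest. Reproducing the probe by hand with the same key and host (flags taken from the log) is just:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o IdentitiesOnly=yes -p 22 \
        -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa \
        docker@192.168.50.70 'exit 0' && echo 'ssh is up'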
	I0829 19:35:08.806493   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetConfigRaw
	I0829 19:35:08.807134   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:08.809574   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.809900   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.809924   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.810227   79559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:35:08.810457   79559 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:08.810478   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:08.810675   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.812964   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.813368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813620   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.813815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.813994   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.814161   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.814338   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.814533   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.814544   79559 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:08.914370   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:08.914415   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914742   79559 buildroot.go:166] provisioning hostname "default-k8s-diff-port-672127"
	I0829 19:35:08.914782   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914975   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.918471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.918829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.918857   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.919021   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.919186   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919373   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.919664   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.919865   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.919884   79559 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-672127 && echo "default-k8s-diff-port-672127" | sudo tee /etc/hostname
	I0829 19:35:09.032573   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-672127
	
	I0829 19:35:09.032606   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.035434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035811   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.035840   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035999   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.036182   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036465   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.036651   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.036833   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.036852   79559 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-672127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-672127/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-672127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:09.142908   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:09.142937   79559 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:09.142978   79559 buildroot.go:174] setting up certificates
	I0829 19:35:09.142995   79559 provision.go:84] configureAuth start
	I0829 19:35:09.143010   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:09.143258   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.145947   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146313   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.146339   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146460   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.148631   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.148953   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.148978   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.149128   79559 provision.go:143] copyHostCerts
	I0829 19:35:09.149188   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:09.149204   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:09.149261   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:09.149368   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:09.149378   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:09.149400   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:09.149492   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:09.149501   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:09.149520   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:09.149578   79559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-672127 san=[127.0.0.1 192.168.50.70 default-k8s-diff-port-672127 localhost minikube]
	I0829 19:35:09.370220   79559 provision.go:177] copyRemoteCerts
	I0829 19:35:09.370277   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:09.370301   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.373233   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373723   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.373756   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373966   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.374180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.374342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.374496   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.457104   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:35:09.481139   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:09.504611   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 19:35:09.529597   79559 provision.go:87] duration metric: took 386.586301ms to configureAuth
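configureAuth regenerated the machine server certificate with SANs 127.0.0.1, 192.168.50.70, default-k8s-diff-port-672127, localhost and minikube, then copied it to /etc/docker on the guest. The SANs can be confirmed with openssl against the host-side copy generated above (path as logged):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'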
	I0829 19:35:09.529628   79559 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:09.529887   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:09.529989   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.532809   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533309   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.533342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533509   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.533743   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.533965   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.534169   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.534372   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.534523   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.534545   79559 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:09.754724   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:09.754752   79559 machine.go:96] duration metric: took 944.279776ms to provisionDockerMachine
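The crio.minikube drop-in written just above only exports CRIO_MINIKUBE_OPTIONS; the assumption (not visible in the log) is that the guest's crio.service reads it via an EnvironmentFile and appends it to the daemon command line after the restart. A quick way to check both halves from the host:

    minikube -p default-k8s-diff-port-672127 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p default-k8s-diff-port-672127 ssh -- 'ps -o args= -C crio'   # should include --insecure-registry 10.96.0.0/12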
	I0829 19:35:09.754766   79559 start.go:293] postStartSetup for "default-k8s-diff-port-672127" (driver="kvm2")
	I0829 19:35:09.754781   79559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:09.754807   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.755236   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:09.755270   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.757713   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.758125   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758274   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.758466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.758682   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.758823   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.841022   79559 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:09.846051   79559 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:09.846081   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:09.846163   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:09.846254   79559 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:09.846379   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:09.857443   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:09.884662   79559 start.go:296] duration metric: took 129.87923ms for postStartSetup
	I0829 19:35:09.884715   79559 fix.go:56] duration metric: took 19.789853711s for fixHost
	I0829 19:35:09.884739   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.888011   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888562   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.888593   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888789   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.888976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889188   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889347   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.889533   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.889723   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.889736   79559 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:09.990749   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960109.967111721
	
	I0829 19:35:09.990772   79559 fix.go:216] guest clock: 1724960109.967111721
	I0829 19:35:09.990782   79559 fix.go:229] Guest: 2024-08-29 19:35:09.967111721 +0000 UTC Remote: 2024-08-29 19:35:09.884720437 +0000 UTC m=+231.415600706 (delta=82.391284ms)
	I0829 19:35:09.990835   79559 fix.go:200] guest clock delta is within tolerance: 82.391284ms
	I0829 19:35:09.990846   79559 start.go:83] releasing machines lock for "default-k8s-diff-port-672127", held for 19.896020367s
	I0829 19:35:09.990891   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.991180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.994076   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.994459   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994613   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995121   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995318   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995407   79559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:09.995464   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.995531   79559 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:09.995569   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.998302   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998673   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998703   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998732   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998750   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998832   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.998976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.999026   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999109   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999162   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999249   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999404   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.999395   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:10.124503   79559 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:10.130734   79559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:10.275859   79559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:10.281662   79559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:10.281728   79559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:10.297464   79559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:10.297488   79559 start.go:495] detecting cgroup driver to use...
	I0829 19:35:10.297553   79559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:10.316686   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:10.332836   79559 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:10.332880   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:10.347021   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:10.364479   79559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:10.506136   79559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:10.659246   79559 docker.go:233] disabling docker service ...
	I0829 19:35:10.659324   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:10.678953   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:10.694844   79559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:10.837509   79559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:10.976512   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:10.993421   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:11.013434   79559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:11.013492   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.023909   79559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:11.023980   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.038560   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.049911   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.060235   79559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:11.076772   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.093357   79559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.110140   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.121770   79559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:11.131641   79559 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:11.131697   79559 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:11.151460   79559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:11.161320   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:11.286180   79559 ssh_runner.go:195] Run: sudo systemctl restart crio
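The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place, and this restart makes CRI-O pick the changes up. The drop-in itself is never printed; reconstructed from the edits, it should end up containing approximately the keys below, which can be verified on the guest:

    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo systemctl is-active crio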
	I0829 19:35:11.382235   79559 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:11.382312   79559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:11.388226   79559 start.go:563] Will wait 60s for crictl version
	I0829 19:35:11.388299   79559 ssh_runner.go:195] Run: which crictl
	I0829 19:35:11.391832   79559 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:11.429509   79559 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:11.429601   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.457180   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.487106   79559 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:11.488483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:11.491607   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.491988   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:11.492027   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.492316   79559 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:11.496448   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:11.512045   79559 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:11.512159   79559 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:11.512219   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:11.549212   79559 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:11.549287   79559 ssh_runner.go:195] Run: which lz4
	I0829 19:35:11.554151   79559 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:11.558691   79559 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:11.558718   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:35:12.826290   79559 crio.go:462] duration metric: took 1.272173781s to copy over tarball
	I0829 19:35:12.826387   79559 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
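The preload step boils down to: check whether /preloaded.tar.lz4 already exists on the guest, push the ~389 MB cri-o image tarball if it does not, and unpack it under /var so the images are available without pulling. Roughly the same flow by hand (commands and paths taken from the log; minikube itself copies the file over its SSH session rather than with scp):

    # on the guest: is the preload already there?
    stat -c "%s %y" /preloaded.tar.lz4
    # from the host: push the tarball if the stat failed
    scp -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa \
      /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 \
      docker@192.168.50.70:/preloaded.tar.lz4
    # back on the guest: unpack into /var, then confirm the images are visible
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | grep kube-apiserver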
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
	I0829 19:35:09.162551   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:11.165654   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:13.662027   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:15.034221   79559 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.207800368s)
	I0829 19:35:15.034254   79559 crio.go:469] duration metric: took 2.207935139s to extract the tarball
	I0829 19:35:15.034263   79559 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:15.070411   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:15.117649   79559 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:35:15.117675   79559 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:35:15.117684   79559 kubeadm.go:934] updating node { 192.168.50.70 8444 v1.31.0 crio true true} ...
	I0829 19:35:15.117793   79559 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-672127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:15.117873   79559 ssh_runner.go:195] Run: crio config
	I0829 19:35:15.161749   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:15.161778   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:15.161795   79559 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:15.161815   79559 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-672127 NodeName:default-k8s-diff-port-672127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:35:15.161949   79559 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-672127"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:15.162002   79559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:35:15.171789   79559 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:15.171858   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:15.181011   79559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0829 19:35:15.197394   79559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:15.213309   79559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
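
(Editor's note) The kubeadm.yaml.new copied to the node above is rendered from the kubeadm options logged at kubeadm.go:181. As a rough, hypothetical illustration only (this is not minikube's actual template or code), a minimal Go sketch of rendering a ClusterConfiguration fragment from those same parameters could look like this:

// Hypothetical sketch: render a kubeadm ClusterConfiguration fragment from
// parameters like the ones logged above. Not minikube's real template.
package main

import (
	"os"
	"text/template"
)

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	KubernetesVersion   string
	ControlPlaneAddress string
	APIServerPort       int
	DNSDomain           string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	t := template.Must(template.New("clusterConfig").Parse(clusterConfigTmpl))
	// Values taken from the kubeadm options logged above.
	p := params{
		KubernetesVersion:   "v1.31.0",
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8444,
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
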
	I0829 19:35:15.231088   79559 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:15.234732   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:15.245700   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:15.368430   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:15.385792   79559 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127 for IP: 192.168.50.70
	I0829 19:35:15.385820   79559 certs.go:194] generating shared ca certs ...
	I0829 19:35:15.385844   79559 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:15.386020   79559 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:15.386108   79559 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:15.386123   79559 certs.go:256] generating profile certs ...
	I0829 19:35:15.386240   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/client.key
	I0829 19:35:15.386324   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key.828c23de
	I0829 19:35:15.386378   79559 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key
	I0829 19:35:15.386523   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:15.386567   79559 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:15.386582   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:15.386615   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:15.386650   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:15.386680   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:15.386736   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:15.387663   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:15.429474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:15.470861   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:15.514906   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:15.552474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:35:15.581749   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:15.605874   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:15.629703   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:35:15.653589   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:15.680222   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:15.706824   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:15.733354   79559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:15.753069   79559 ssh_runner.go:195] Run: openssl version
	I0829 19:35:15.759905   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:15.770507   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776103   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776159   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.783674   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:15.797519   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:15.809517   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814243   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814311   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.819834   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:15.830130   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:15.840473   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.844974   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.845033   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.850619   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:15.860955   79559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:15.865359   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:15.871149   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:15.876982   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:15.882635   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:15.888020   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:15.893423   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
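
(Editor's note) The sequence of openssl x509 -checkend 86400 runs above verifies that each control-plane certificate is still valid for at least another 24 hours before reusing it. As an illustration under stated assumptions (the certificate path below is an example, not something taken from this run), an equivalent check could be written directly in Go:

// Hypothetical sketch of an "expires within 24h?" check, the same condition
// `openssl x509 -noout -checkend 86400` tests in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // example path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the certificate expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
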
	I0829 19:35:15.898989   79559 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:15.899085   79559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:15.899156   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:15.939743   79559 cri.go:89] found id: ""
	I0829 19:35:15.939817   79559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:15.949877   79559 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:15.949896   79559 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:15.949938   79559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:15.959436   79559 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:15.960417   79559 kubeconfig.go:125] found "default-k8s-diff-port-672127" server: "https://192.168.50.70:8444"
	I0829 19:35:15.962469   79559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:15.971672   79559 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0829 19:35:15.971700   79559 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:15.971710   79559 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:15.971777   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:16.015084   79559 cri.go:89] found id: ""
	I0829 19:35:16.015173   79559 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:16.031614   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:16.044359   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:16.044384   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:16.044448   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:35:16.056073   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:16.056139   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:16.066426   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:35:16.075300   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:16.075368   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:16.084795   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.093739   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:16.093804   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.103539   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:35:16.112676   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:16.112744   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:16.121997   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:16.134461   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:16.246853   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.577230   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.330337638s)
	I0829 19:35:17.577271   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.810593   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.892546   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.993500   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:17.993595   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:18.494169   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
	I0829 19:35:15.662198   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:16.162890   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:16.162919   79073 pod_ready.go:82] duration metric: took 11.506772145s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:16.162931   79073 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.170586   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:18.994574   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.493764   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.509384   79559 api_server.go:72] duration metric: took 1.515882118s to wait for apiserver process to appear ...
	I0829 19:35:19.509415   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:35:19.509440   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.555577   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.555625   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:21.555642   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.572445   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.572481   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:22.009612   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.017592   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.017627   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:22.510148   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.516104   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.516140   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:23.009648   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:23.016342   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:35:23.022852   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:23.022878   79559 api_server.go:131] duration metric: took 3.513455745s to wait for apiserver health ...
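
(Editor's note) The healthz wait above shows the usual restart progression: 403 while anonymous access is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are pending, then 200 once the control plane is ready. As a minimal sketch of that kind of probe loop (not minikube's actual api_server.go implementation; the URL, timeout, and poll interval are illustrative assumptions), one could write:

// Hypothetical sketch: poll an apiserver /healthz endpoint until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate here, so skip
		// verification for this illustrative probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.70:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz: ok")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		} else {
			fmt.Println("healthz error:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
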
	I0829 19:35:23.022889   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:23.022897   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:23.024557   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:23.025764   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:23.035743   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:35:23.075272   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:23.091948   79559 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:23.091991   79559 system_pods.go:61] "coredns-6f6b679f8f-p92hj" [736e7c46-b945-445f-a404-20a609f766e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:23.092004   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [cf016602-46cd-4972-bdd3-1ef5d881b6e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:23.092014   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [eb51ac87-f5e4-4031-84fe-811da2ff8d63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:23.092026   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [caf7b777-935f-4351-b58d-60bb8175bec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:23.092034   79559 system_pods.go:61] "kube-proxy-tlc89" [9a11e5a6-b624-494b-8e94-d362b94fb98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:35:23.092043   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fe83e2af-b046-4d56-9b5c-d7a17db7e854] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:23.092053   79559 system_pods.go:61] "metrics-server-6867b74b74-tbkxg" [6d8f8c92-4f89-4a2a-8690-51a850768516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:23.092065   79559 system_pods.go:61] "storage-provisioner" [7349bb79-c402-4587-ab0b-e52e5d455c61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:35:23.092078   79559 system_pods.go:74] duration metric: took 16.779413ms to wait for pod list to return data ...
	I0829 19:35:23.092091   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:23.099492   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:23.099533   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:23.099547   79559 node_conditions.go:105] duration metric: took 7.450351ms to run NodePressure ...
	I0829 19:35:23.099571   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:23.371279   79559 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377322   79559 kubeadm.go:739] kubelet initialised
	I0829 19:35:23.377346   79559 kubeadm.go:740] duration metric: took 6.045074ms waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377353   79559 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:23.384232   79559 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.391931   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391960   79559 pod_ready.go:82] duration metric: took 7.702072ms for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.391971   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391980   79559 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.396708   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396728   79559 pod_ready.go:82] duration metric: took 4.739691ms for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.396736   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396744   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.401274   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401298   79559 pod_ready.go:82] duration metric: took 4.546455ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.401308   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401314   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:20.670356   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:22.670699   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.786910   78865 start.go:364] duration metric: took 49.670356886s to acquireMachinesLock for "no-preload-690795"
	I0829 19:35:27.786963   78865 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:27.786975   78865 fix.go:54] fixHost starting: 
	I0829 19:35:27.787377   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:27.787425   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:27.803558   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0829 19:35:27.803903   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:27.804328   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:35:27.804348   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:27.804623   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:27.804824   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:27.804967   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:35:27.806332   78865 fix.go:112] recreateIfNeeded on no-preload-690795: state=Stopped err=<nil>
	I0829 19:35:27.806353   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	W0829 19:35:27.806525   78865 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:27.808678   78865 out.go:177] * Restarting existing kvm2 VM for "no-preload-690795" ...
	I0829 19:35:25.407622   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.910410   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:25.170397   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.669465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:27.810111   78865 main.go:141] libmachine: (no-preload-690795) Calling .Start
	I0829 19:35:27.810300   78865 main.go:141] libmachine: (no-preload-690795) Ensuring networks are active...
	I0829 19:35:27.811063   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network default is active
	I0829 19:35:27.811464   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network mk-no-preload-690795 is active
	I0829 19:35:27.811848   78865 main.go:141] libmachine: (no-preload-690795) Getting domain xml...
	I0829 19:35:27.812590   78865 main.go:141] libmachine: (no-preload-690795) Creating domain...
	I0829 19:35:29.131821   78865 main.go:141] libmachine: (no-preload-690795) Waiting to get IP...
	I0829 19:35:29.132876   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.133519   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.133595   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.133481   80876 retry.go:31] will retry after 252.123266ms: waiting for machine to come up
	I0829 19:35:29.387046   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.387534   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.387561   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.387496   80876 retry.go:31] will retry after 304.157394ms: waiting for machine to come up
	I0829 19:35:29.693891   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.694581   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.694603   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.694560   80876 retry.go:31] will retry after 366.980614ms: waiting for machine to come up
	I0829 19:35:30.063032   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.063466   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.063504   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.063431   80876 retry.go:31] will retry after 562.46082ms: waiting for machine to come up
	I0829 19:35:30.412868   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.908366   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.408823   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.408848   79559 pod_ready.go:82] duration metric: took 10.007525744s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.408862   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418176   79559 pod_ready.go:93] pod "kube-proxy-tlc89" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.418202   79559 pod_ready.go:82] duration metric: took 9.33136ms for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418214   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424362   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.424388   79559 pod_ready.go:82] duration metric: took 6.165646ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424401   79559 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:29.670803   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.171154   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:30.627531   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.628123   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.628147   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.628030   80876 retry.go:31] will retry after 488.97189ms: waiting for machine to come up
	I0829 19:35:31.118901   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.119457   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.119480   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.119398   80876 retry.go:31] will retry after 801.189699ms: waiting for machine to come up
	I0829 19:35:31.921939   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.922447   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.922482   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.922391   80876 retry.go:31] will retry after 828.788864ms: waiting for machine to come up
	I0829 19:35:32.752986   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:32.753429   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:32.753465   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:32.753385   80876 retry.go:31] will retry after 1.404436811s: waiting for machine to come up
	I0829 19:35:34.159129   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:34.159714   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:34.159741   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:34.159678   80876 retry.go:31] will retry after 1.312099391s: waiting for machine to come up
	I0829 19:35:35.473045   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:35.473510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:35.473549   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:35.473461   80876 retry.go:31] will retry after 1.46129368s: waiting for machine to come up
	I0829 19:35:35.431524   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:37.437993   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
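The YAML above is the kubeadm/kubelet/kube-proxy configuration minikube renders for the node before shipping it to the guest as /var/tmp/minikube/kubeadm.yaml.new (the 2123-byte scp a few lines below). As a rough sketch of the technique only -- not minikube's actual template or types -- rendering the InitConfiguration stanza from a struct with text/template looks like this:

package main

// Hypothetical sketch: render a kubeadm InitConfiguration stanza from a Go
// struct via text/template. The template and field names here are
// illustrative, not minikube's real template.
import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the log above (old-k8s-version-467349 on 192.168.72.112).
	cfg := initCfg{
		AdvertiseAddress: "192.168.72.112",
		BindPort:         8443,
		NodeName:         "old-k8s-version-467349",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}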
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
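The one-liner above is how minikube pins control-plane.minikube.internal to the node IP: drop any stale entry from /etc/hosts, append the fresh tab-separated mapping, and copy the result back with sudo. A minimal Go equivalent of that rewrite, shown here against a scratch file rather than the real /etc/hosts (the function and file names are illustrative):

package main

// Minimal sketch (not minikube code): replace any existing
// "control-plane.minikube.internal" entry in a hosts-style file with a fresh
// one, mirroring the grep/echo/cp pipeline in the log above.
import (
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example against a scratch file; on the guest the target is /etc/hosts.
	_ = pinHost("hosts.test", "192.168.72.112", "control-plane.minikube.internal")
}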
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
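The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes, which is how the trust store in /etc/ssl/certs looks certificates up. A hedged sketch of the same two steps -- hash, then link -- run locally instead of over SSH; like the remote commands, it simply shells out to openssl, and the paths are illustrative:

package main

// Sketch only: compute the OpenSSL subject hash for a CA certificate and
// symlink it into a trust directory, the same thing the
// "openssl x509 -hash -noout" + "ln -fs" steps in the log do on the guest.
import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // refresh the link if it already exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}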
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:34.172082   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.669124   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.669696   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.936034   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:36.936510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:36.936539   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:36.936463   80876 retry.go:31] will retry after 1.943807762s: waiting for machine to come up
	I0829 19:35:38.881644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:38.882110   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:38.882133   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:38.882067   80876 retry.go:31] will retry after 3.173912619s: waiting for machine to come up
	I0829 19:35:39.932725   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.429439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.168816   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.669594   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.059140   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:42.059668   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:42.059692   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:42.059602   80876 retry.go:31] will retry after 4.193427915s: waiting for machine to come up
	I0829 19:35:44.430473   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.431149   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.671012   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.168836   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.256270   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.256783   78865 main.go:141] libmachine: (no-preload-690795) Found IP for machine: 192.168.39.76
	I0829 19:35:46.256806   78865 main.go:141] libmachine: (no-preload-690795) Reserving static IP address...
	I0829 19:35:46.256822   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has current primary IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.257249   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.257274   78865 main.go:141] libmachine: (no-preload-690795) Reserved static IP address: 192.168.39.76
	I0829 19:35:46.257289   78865 main.go:141] libmachine: (no-preload-690795) DBG | skip adding static IP to network mk-no-preload-690795 - found existing host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"}
	I0829 19:35:46.257299   78865 main.go:141] libmachine: (no-preload-690795) Waiting for SSH to be available...
	I0829 19:35:46.257313   78865 main.go:141] libmachine: (no-preload-690795) DBG | Getting to WaitForSSH function...
	I0829 19:35:46.259334   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259664   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.259692   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259788   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH client type: external
	I0829 19:35:46.259821   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa (-rw-------)
	I0829 19:35:46.259859   78865 main.go:141] libmachine: (no-preload-690795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:46.259871   78865 main.go:141] libmachine: (no-preload-690795) DBG | About to run SSH command:
	I0829 19:35:46.259902   78865 main.go:141] libmachine: (no-preload-690795) DBG | exit 0
	I0829 19:35:46.389869   78865 main.go:141] libmachine: (no-preload-690795) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:46.390295   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetConfigRaw
	I0829 19:35:46.390987   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.393890   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394310   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.394342   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394673   78865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:35:46.394846   78865 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:46.394869   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:46.395082   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.397203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397508   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.397535   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397676   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.397862   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398011   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398178   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.398314   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.398475   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.398486   78865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:46.502132   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:46.502163   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502426   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:35:46.502449   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.505084   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505414   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.505443   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505665   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.505861   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506035   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506219   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.506379   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.506573   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.506597   78865 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-690795 && echo "no-preload-690795" | sudo tee /etc/hostname
	I0829 19:35:46.627246   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-690795
	
	I0829 19:35:46.627269   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.630081   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630430   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.630454   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630611   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.630780   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.630947   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.631233   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.631397   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.631545   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.631568   78865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-690795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-690795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-690795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:46.746055   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:46.746106   78865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:46.746131   78865 buildroot.go:174] setting up certificates
	I0829 19:35:46.746143   78865 provision.go:84] configureAuth start
	I0829 19:35:46.746160   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.746411   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.749125   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749476   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.749497   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.751828   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752178   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.752203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752317   78865 provision.go:143] copyHostCerts
	I0829 19:35:46.752384   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:46.752404   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:46.752475   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:46.752580   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:46.752591   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:46.752619   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:46.752693   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:46.752703   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:46.752728   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:46.752791   78865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.no-preload-690795 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-690795]
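The server.pem generated above is a CA-signed certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube and the machine name, so the machine's Docker/SSH endpoints validate under any of those names. A generic crypto/x509 sketch of issuing such a certificate; this mirrors the idea, not minikube's provisioning code, and it skips error handling and key persistence for brevity:

package main

// Illustrative only: create a throwaway CA, then sign a server certificate
// carrying the DNS and IP SANs listed in the log above.
import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-690795"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: localhost, minikube, the node name and IPs.
		DNSNames:    []string{"localhost", "minikube", "no-preload-690795"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}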
	I0829 19:35:46.901689   78865 provision.go:177] copyRemoteCerts
	I0829 19:35:46.901744   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:46.901764   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.904873   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905241   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.905287   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905458   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.905657   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.905805   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.905960   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:46.988181   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:47.011149   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:35:47.034849   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:47.057375   78865 provision.go:87] duration metric: took 311.217634ms to configureAuth
	I0829 19:35:47.057402   78865 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:47.057599   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:47.057695   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.060274   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060594   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.060620   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060750   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.060976   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061149   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061311   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.061465   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.061676   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.061703   78865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:47.284836   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:47.284862   78865 machine.go:96] duration metric: took 890.004565ms to provisionDockerMachine
	I0829 19:35:47.284876   78865 start.go:293] postStartSetup for "no-preload-690795" (driver="kvm2")
	I0829 19:35:47.284889   78865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:47.284909   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.285207   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:47.285232   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.287875   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288162   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.288180   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288391   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.288597   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.288772   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.288899   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.372833   78865 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:47.376649   78865 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:47.376670   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:47.376729   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:47.376801   78865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:47.376881   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:47.385721   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:47.407601   78865 start.go:296] duration metric: took 122.711153ms for postStartSetup
	I0829 19:35:47.407640   78865 fix.go:56] duration metric: took 19.620666095s for fixHost
	I0829 19:35:47.407673   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.410483   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.410873   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.410903   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.411139   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.411363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411527   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411674   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.411830   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.411987   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.412001   78865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:47.518841   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960147.499237123
	
	I0829 19:35:47.518864   78865 fix.go:216] guest clock: 1724960147.499237123
	I0829 19:35:47.518872   78865 fix.go:229] Guest: 2024-08-29 19:35:47.499237123 +0000 UTC Remote: 2024-08-29 19:35:47.407643858 +0000 UTC m=+351.882891548 (delta=91.593265ms)
	I0829 19:35:47.518891   78865 fix.go:200] guest clock delta is within tolerance: 91.593265ms
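The clock check above compares the guest's date +%s.%N output against the host-side timestamp and accepts the drift when it falls inside a tolerance (here about 92ms). A toy reproduction of that comparison; the one-second tolerance is an assumption for illustration, not minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the log: guest "date +%s.%N" vs. the host clock.
	guest := time.Unix(1724960147, 499237123)
	remote := time.Date(2024, 8, 29, 19, 35, 47, 407643858, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
}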
	I0829 19:35:47.518896   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 19.731957743s
	I0829 19:35:47.518914   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.519214   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:47.521738   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522125   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.522153   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522310   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.522806   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523016   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523082   78865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:47.523127   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.523209   78865 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:47.523225   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.526076   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526443   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.526462   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526489   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526681   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.526826   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527005   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527036   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.527073   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.527199   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.527197   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.527370   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527537   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527690   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.635450   78865 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:47.641274   78865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:47.788805   78865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:47.794545   78865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:47.794601   78865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:47.810156   78865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:47.810175   78865 start.go:495] detecting cgroup driver to use...
	I0829 19:35:47.810228   78865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:47.825795   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:47.839011   78865 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:47.839061   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:47.851854   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:47.864467   78865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:47.999155   78865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:48.143858   78865 docker.go:233] disabling docker service ...
	I0829 19:35:48.143921   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:48.157740   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:48.172067   78865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:48.339557   78865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:48.462950   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:48.475646   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:48.492262   78865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:48.492329   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.501580   78865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:48.501647   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.511241   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.520477   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.530413   78865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:48.540457   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.551258   78865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.567365   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.577266   78865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:48.586423   78865 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:48.586479   78865 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:48.599527   78865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:48.608666   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:48.721808   78865 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:48.811417   78865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:48.811495   78865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:48.816689   78865 start.go:563] Will wait 60s for crictl version
	I0829 19:35:48.816750   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:48.820563   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:48.862786   78865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:48.862869   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.889834   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.918515   78865 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:48.919643   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:48.922182   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922530   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:48.922560   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922725   78865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:48.926877   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:48.939254   78865 kubeadm.go:883] updating cluster {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:48.939379   78865 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:48.939413   78865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:48.972281   78865 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:48.972304   78865 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:48.972345   78865 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.972361   78865 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.972384   78865 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.972425   78865 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.972443   78865 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:48.972452   78865 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 19:35:48.972496   78865 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.972558   78865 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973929   78865 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.973979   78865 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 19:35:48.973933   78865 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.973931   78865 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.973932   78865 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973939   78865 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.229315   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 19:35:49.232334   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.271261   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.328903   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.339435   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.349057   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.356840   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.387705   78865 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 19:35:49.387748   78865 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 19:35:49.387760   78865 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.387777   78865 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.387808   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.387829   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.389731   78865 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 19:35:49.389769   78865 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.389809   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.438231   78865 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 19:35:49.438264   78865 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.438304   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.453177   78865 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 19:35:49.453220   78865 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.453270   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.455713   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.455767   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.455802   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.455804   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.455772   78865 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 19:35:49.455895   78865 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.455921   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.458141   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.539090   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.539125   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.568605   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.573622   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.678619   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.680581   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.680584   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.680671   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.699638   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.706556   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.803909   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.809759   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 19:35:49.809863   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.810356   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 19:35:49.810423   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:49.811234   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 19:35:49.811285   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:49.832040   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 19:35:49.832102   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 19:35:49.832153   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:49.832162   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:49.862517   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 19:35:49.862537   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862578   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862653   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 19:35:49.862696   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 19:35:49.862703   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 19:35:49.862731   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 19:35:49.862760   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 19:35:49.862788   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:35:50.192890   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.930928   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:50.931805   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.430716   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.168967   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:52.169327   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:51.820978   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.958376621s)
	I0829 19:35:51.821014   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 19:35:51.821035   78865 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821077   78865 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.958265625s)
	I0829 19:35:51.821109   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821108   78865 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.62819044s)
	I0829 19:35:51.821211   78865 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 19:35:51.821243   78865 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:51.821275   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:51.821111   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 19:35:55.931182   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.431477   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.669752   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:56.670764   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:55.594240   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.773093303s)
	I0829 19:35:55.594275   78865 ssh_runner.go:235] Completed: which crictl: (3.77298113s)
	I0829 19:35:55.594290   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 19:35:55.594340   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:55.594348   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:55.594403   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:57.972145   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377784997s)
	I0829 19:35:57.972180   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.377757134s)
	I0829 19:35:57.972210   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 19:35:57.972223   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:57.972237   78865 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:57.972270   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:58.025853   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:59.843856   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.871560481s)
	I0829 19:35:59.843883   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818003416s)
	I0829 19:35:59.843887   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 19:35:59.843915   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 19:35:59.843925   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.844004   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:35:59.844019   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.849625   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 19:36:00.432638   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.078312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.170365   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.668465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.670347   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.294196   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.450154791s)
	I0829 19:36:01.294230   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 19:36:01.294273   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:01.294336   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:03.144937   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.850574318s)
	I0829 19:36:03.144978   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 19:36:03.145018   78865 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.145081   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.803763   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 19:36:03.803802   78865 cache_images.go:123] Successfully loaded all cached images
	I0829 19:36:03.803807   78865 cache_images.go:92] duration metric: took 14.831492974s to LoadCachedImages
	I0829 19:36:03.803818   78865 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.31.0 crio true true} ...
	I0829 19:36:03.803927   78865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:36:03.803988   78865 ssh_runner.go:195] Run: crio config
	I0829 19:36:03.854859   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:03.854879   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:03.854894   78865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:36:03.854915   78865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-690795 NodeName:no-preload-690795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:36:03.855055   78865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-690795"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:36:03.855114   78865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:36:03.865163   78865 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:36:03.865236   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:36:03.874348   78865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:36:03.891540   78865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:36:03.908488   78865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 19:36:03.926440   78865 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0829 19:36:03.930270   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:36:03.942353   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:36:04.066646   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:36:04.083872   78865 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795 for IP: 192.168.39.76
	I0829 19:36:04.083901   78865 certs.go:194] generating shared ca certs ...
	I0829 19:36:04.083921   78865 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:36:04.084106   78865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:36:04.084172   78865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:36:04.084186   78865 certs.go:256] generating profile certs ...
	I0829 19:36:04.084307   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/client.key
	I0829 19:36:04.084432   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key.8a2db174
	I0829 19:36:04.084492   78865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key
	I0829 19:36:04.084656   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:36:04.084705   78865 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:36:04.084718   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:36:04.084753   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:36:04.084790   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:36:04.084827   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:36:04.084883   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:36:04.085744   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:36:04.124689   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:36:04.158769   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:36:04.188748   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:36:04.217577   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:36:04.251166   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:36:04.282961   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:36:04.306431   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:36:04.329260   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:36:04.365050   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:36:04.393054   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:36:04.417384   78865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:36:04.434555   78865 ssh_runner.go:195] Run: openssl version
	I0829 19:36:04.440074   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:36:04.451378   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455603   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455655   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.461114   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:36:04.472522   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:36:04.483064   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487316   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487383   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.492860   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:36:04.504284   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:36:04.515522   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519853   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519908   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.525240   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:36:04.536612   78865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:36:04.540905   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:36:04.546622   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:36:04.552303   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:36:04.558306   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:36:04.564129   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:36:04.569635   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:36:04.575196   78865 kubeadm.go:392] StartCluster: {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:36:04.575279   78865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:36:04.575360   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.619563   78865 cri.go:89] found id: ""
	I0829 19:36:04.619638   78865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:36:04.629655   78865 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:36:04.629675   78865 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:36:04.629785   78865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:36:04.638771   78865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:36:04.639763   78865 kubeconfig.go:125] found "no-preload-690795" server: "https://192.168.39.76:8443"
	I0829 19:36:04.641783   78865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:36:04.650605   78865 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0829 19:36:04.650634   78865 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:36:04.650644   78865 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:36:04.650693   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.685589   78865 cri.go:89] found id: ""
	I0829 19:36:04.685656   78865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:36:04.702584   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:36:04.711693   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:36:04.711712   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:36:04.711753   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:36:04.720291   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:36:04.720349   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:36:04.729301   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:36:04.739449   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:36:04.739513   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:36:04.748786   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.757128   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:36:04.757175   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.767533   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:36:04.777322   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:36:04.777373   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:36:04.786269   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:36:04.795387   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:04.904530   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.430803   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:07.431525   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.169466   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.669719   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:05.750216   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.949551   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.043930   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.140396   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:36:06.140505   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.641069   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.141458   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.161360   78865 api_server.go:72] duration metric: took 1.020963124s to wait for apiserver process to appear ...
	I0829 19:36:07.161390   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:36:07.161426   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.327675   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:36:10.327707   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:36:10.327721   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.396704   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.396737   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:10.661699   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.666518   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.666544   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.162227   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.167736   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.167774   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.662428   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.668688   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.668727   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:12.162372   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:12.168297   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:36:12.175933   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:36:12.175956   78865 api_server.go:131] duration metric: took 5.014557664s to wait for apiserver health ...
	I0829 19:36:12.175967   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:12.175975   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:12.177903   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:36:09.930962   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:11.932180   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.669915   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.169150   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:12.179056   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:36:12.202639   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:36:12.221804   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:36:12.242859   78865 system_pods.go:59] 8 kube-system pods found
	I0829 19:36:12.242897   78865 system_pods.go:61] "coredns-6f6b679f8f-j8zzh" [01eaffa5-a976-441c-987c-bdf3b7f72cd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:36:12.242905   78865 system_pods.go:61] "etcd-no-preload-690795" [df54ae59-44ff-4f7b-b6c0-6145bdae3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:36:12.242912   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [aee247f2-1381-4571-a671-2cf140c78196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:36:12.242919   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [69244a85-2778-46c8-a95c-d0f8a264c0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:36:12.242923   78865 system_pods.go:61] "kube-proxy-q4mbt" [985478f9-235d-4922-a7fd-a0cbdddf3f68] Running
	I0829 19:36:12.242934   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [e1e141ab-eb79-4c87-bccd-274f1e7495b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:36:12.242940   78865 system_pods.go:61] "metrics-server-6867b74b74-svnwn" [e096a3dc-1166-4ee3-9f3f-e044064a5a13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:36:12.242945   78865 system_pods.go:61] "storage-provisioner" [6fc868fa-2221-45ad-903e-cd3d2297a3e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:36:12.242952   78865 system_pods.go:74] duration metric: took 21.125083ms to wait for pod list to return data ...
	I0829 19:36:12.242962   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:36:12.253567   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:36:12.253598   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:36:12.253612   78865 node_conditions.go:105] duration metric: took 10.645029ms to run NodePressure ...
	I0829 19:36:12.253634   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:12.514683   78865 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520060   78865 kubeadm.go:739] kubelet initialised
	I0829 19:36:12.520082   78865 kubeadm.go:740] duration metric: took 5.371928ms waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520088   78865 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:36:12.524795   78865 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:14.533484   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:14.430676   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:16.930723   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.668846   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.669308   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.031326   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.530568   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.430550   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.431080   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.431736   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.168590   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.168898   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.531983   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.032162   78865 pod_ready.go:93] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:22.032187   78865 pod_ready.go:82] duration metric: took 9.507358099s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:22.032200   78865 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038935   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.038956   78865 pod_ready.go:82] duration metric: took 1.006750868s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038966   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043258   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.043278   78865 pod_ready.go:82] duration metric: took 4.305789ms for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043298   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049140   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.049159   78865 pod_ready.go:82] duration metric: took 5.852855ms for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049170   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055033   78865 pod_ready.go:93] pod "kube-proxy-q4mbt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.055054   78865 pod_ready.go:82] duration metric: took 5.87681ms for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055067   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229706   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.229734   78865 pod_ready.go:82] duration metric: took 174.6598ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229748   78865 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:25.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:25.930818   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.430312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.169024   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:26.169599   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.668840   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:27.736899   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.235632   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.430611   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.930362   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.669561   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.671106   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.236488   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.736240   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.931264   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.430665   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:35.172336   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.671072   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:36.736855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:38.736902   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:39.929907   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.930998   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:40.169548   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:42.672996   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.236777   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.736150   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.931489   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:45.933439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.431152   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:45.169686   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:47.169784   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:46.236187   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.736605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.431543   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.931361   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:49.669984   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.168756   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.736642   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:53.236282   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.430710   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:57.430791   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:54.169625   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.169688   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.170249   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.235887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:00.236133   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.931992   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.430785   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:00.669482   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.669691   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.237155   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.237334   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.431033   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.431419   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.431901   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.168832   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:07.669695   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.736029   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.736415   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:10.930895   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.430209   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:10.168947   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:12.669439   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:11.236100   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.236138   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.930954   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.931355   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.669895   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.170115   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.736304   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:19.736492   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.429878   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.430233   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:19.668651   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:21.669949   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.670402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.235749   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.236078   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.431492   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.930440   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:26.168605   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.170938   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.735518   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.737503   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.431222   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.931202   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.668490   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:32.669630   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.235788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.735661   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.931996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.932964   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.429817   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:35.168843   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.169415   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.736103   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.736554   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.235995   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.430698   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.430836   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:39.670059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.169724   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.237785   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.736417   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.929743   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.930603   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.669068   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:47.169440   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.737461   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.236079   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.431026   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.931134   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:49.671574   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:52.168702   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.237371   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:53.736122   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.430278   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.431024   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.169824   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.668831   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.236564   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.736176   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.930996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.931806   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.430980   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:59.169059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:01.668656   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.669716   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.736901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.236263   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.431293   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:07.930953   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.168530   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.169657   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.736429   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.236731   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.931677   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.431484   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:10.668769   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.669771   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:10.736651   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.737433   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.236521   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:14.930832   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:16.931496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:14.669989   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.169402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.736005   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:20.236887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:19.430240   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.930946   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
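
Each retry cycle above is the same probe: the harness runs "sudo crictl ps -a --quiet --name=<component>" for every expected control-plane container and gets an empty ID list back, which is what produces the repeated found id: "" / "0 containers" / No container was found matching lines. A condensed, hand-runnable form of that probe on the node (the crictl invocation is taken verbatim from the log; wrapping it in a loop is only an illustration, not part of the harness):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"   # prints matching container IDs; empty output here
    done
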
	I0829 19:38:19.170077   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.670142   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.672076   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:22.237387   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.737069   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.934621   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:26.430632   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:26.168927   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.169453   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:27.235855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:29.236537   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.929913   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.930403   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.931280   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:30.169738   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.669256   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:31.736303   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:34.235284   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:35.430450   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.931562   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
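
Every "describe nodes" attempt in this stretch fails the same way: connection refused on localhost:8443. Combined with the empty crictl listings, this suggests the kube-apiserver container for this v1.20.0 (old-k8s-version) node was never created under CRI-O, so the harness can only fall back to the kubelet, dmesg and CRI-O journals. An equivalent manual check on the node, using only commands already present in the log, would be:

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output => no apiserver container exists
    sudo journalctl -u crio -n 400                    # CRI-O side: why containers are not being created
    sudo journalctl -u kubelet -n 400                 # kubelet side: static pod / manifest errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
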
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:35.169023   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.668411   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:36.236332   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:38.236971   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:40.237468   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.932147   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.430752   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.170246   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.669492   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.737831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.236831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:44.930652   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:46.930941   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:44.669939   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.168670   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.735386   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.736163   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.431741   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.931271   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.169856   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.669655   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:52.236654   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:54.735292   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:53.931350   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.430287   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.432418   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:54.170237   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.668188   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.668309   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.736467   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:59.236483   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:00.930424   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.931161   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:00.668839   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.670762   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.236788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:03.237406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.430398   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.431044   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:05.169920   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.170223   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.735375   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.737017   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.235794   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:09.930945   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:11.931285   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:09.669065   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.169167   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.236756   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.237536   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.431203   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.930983   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:14.169307   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.163640   79073 pod_ready.go:82] duration metric: took 4m0.000694226s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:16.163683   79073 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:16.163706   79073 pod_ready.go:39] duration metric: took 4m12.036045825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:16.163738   79073 kubeadm.go:597] duration metric: took 4m20.35086556s to restartPrimaryControlPlane
	W0829 19:39:16.163795   79073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:16.163827   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:16.736978   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.236047   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.431674   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:21.930447   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:21.236189   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.236347   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.237213   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.932634   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:26.431145   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:28.431496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:27.237871   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:29.735887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:30.931849   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:33.424593   79559 pod_ready.go:82] duration metric: took 4m0.000177098s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:33.424633   79559 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:33.424656   79559 pod_ready.go:39] duration metric: took 4m10.047294609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:33.424687   79559 kubeadm.go:597] duration metric: took 4m17.474785939s to restartPrimaryControlPlane
	W0829 19:39:33.424745   79559 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:33.424773   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
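	(For context, the pod_ready.go lines above poll the metrics-server pod every few seconds and give up after 4m0s, at which point the control-plane restart is abandoned and kubeadm reset is run. A rough equivalent of that wait using plain kubectl, illustrative only; the harness uses its own poller, and the k8s-app=metrics-server label is assumed from the standard addon manifest:

		# illustrative only; not what the harness executes
		kubectl --context <profile> -n kube-system \
		  wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
	)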
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.737021   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.236447   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:36.736406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:39.235941   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:42.401382   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.237529155s)
	I0829 19:39:42.401460   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:42.428754   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:42.441896   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:42.456122   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:42.456147   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:42.456190   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:42.471887   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:42.471947   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:42.483709   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:42.493000   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:42.493070   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:42.511916   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.520829   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:42.520891   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.530567   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:42.540199   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:42.540252   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
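	(The ls/grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init rewrites it. A compact shell sketch of the visible behaviour, not the actual Go code in kubeadm.go:

		ep="https://control-plane.minikube.internal:8443"
		for f in admin kubelet controller-manager scheduler; do
		  # keep the file only if it already references the expected endpoint
		  sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
		done
	)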
	I0829 19:39:42.550058   79073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:42.596809   79073 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:39:42.596966   79073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:42.706623   79073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:42.706766   79073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:42.706931   79073 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:42.717740   79073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:42.719760   79073 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:42.719862   79073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:42.719929   79073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:42.720023   79073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:42.720079   79073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:42.720144   79073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:42.720193   79073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:42.720248   79073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:42.720315   79073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:42.720386   79073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:42.720459   79073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:42.720496   79073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:42.720555   79073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:42.827328   79073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:43.276222   79073 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:39:43.445594   79073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:43.554811   79073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:43.788184   79073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:43.788791   79073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:43.791871   79073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:43.794448   79073 out.go:235]   - Booting up control plane ...
	I0829 19:39:43.794600   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:43.794702   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:43.794800   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:43.813894   79073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:43.822272   79073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:43.822357   79073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:41.236034   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:43.236446   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:45.237605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:39:43.950283   79073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:39:43.950458   79073 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:39:44.452956   79073 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.82915ms
	I0829 19:39:44.453068   79073 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:39:49.455000   79073 kubeadm.go:310] [api-check] The API server is healthy after 5.001789194s
	I0829 19:39:49.473145   79073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:39:49.496760   79073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:39:49.530950   79073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:39:49.531148   79073 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-920571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:39:49.548546   79073 kubeadm.go:310] [bootstrap-token] Using token: bc4428.p8e3szrujohqnvnh
	I0829 19:39:47.735610   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.735833   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.549992   79073 out.go:235]   - Configuring RBAC rules ...
	I0829 19:39:49.550151   79073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:39:49.558070   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:39:49.573758   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:39:49.579988   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:39:49.585250   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:39:49.592477   79073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:39:49.863168   79073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:39:50.294056   79073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:39:50.862652   79073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:39:50.863644   79073 kubeadm.go:310] 
	I0829 19:39:50.863717   79073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:39:50.863729   79073 kubeadm.go:310] 
	I0829 19:39:50.863861   79073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:39:50.863881   79073 kubeadm.go:310] 
	I0829 19:39:50.863917   79073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:39:50.864019   79073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:39:50.864101   79073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:39:50.864111   79073 kubeadm.go:310] 
	I0829 19:39:50.864215   79073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:39:50.864225   79073 kubeadm.go:310] 
	I0829 19:39:50.864298   79073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:39:50.864312   79073 kubeadm.go:310] 
	I0829 19:39:50.864398   79073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:39:50.864517   79073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:39:50.864617   79073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:39:50.864631   79073 kubeadm.go:310] 
	I0829 19:39:50.864743   79073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:39:50.864856   79073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:39:50.864869   79073 kubeadm.go:310] 
	I0829 19:39:50.864983   79073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865110   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:39:50.865142   79073 kubeadm.go:310] 	--control-plane 
	I0829 19:39:50.865152   79073 kubeadm.go:310] 
	I0829 19:39:50.865262   79073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:39:50.865270   79073 kubeadm.go:310] 
	I0829 19:39:50.865370   79073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865527   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:39:50.866485   79073 kubeadm.go:310] W0829 19:39:42.565022    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866852   79073 kubeadm.go:310] W0829 19:39:42.566073    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866979   79073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:39:50.867009   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:39:50.867020   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:39:50.868683   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:39:50.869952   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:39:50.880385   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
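	(The log only records that 496 bytes were copied to /etc/cni/net.d/1-k8s.conflist; the contents are not shown. For orientation, a generic bridge CNI conflist has roughly this shape; this is an assumed example, not the exact file minikube writes:

		# assumed example, not the exact 496-byte file minikube writes
		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF
	)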
	I0829 19:39:50.900028   79073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:39:50.900152   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:50.900187   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-920571 minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=embed-certs-920571 minikube.k8s.io/primary=true
	I0829 19:39:51.090710   79073 ops.go:34] apiserver oom_adj: -16
	I0829 19:39:51.090865   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:51.591720   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.091579   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.591872   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.091671   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.591191   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.091640   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.591356   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.674005   79073 kubeadm.go:1113] duration metric: took 3.773916232s to wait for elevateKubeSystemPrivileges
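The burst of "kubectl get sa default" runs above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube polls until the default service account is visible. A minimal standalone sketch of that kind of retry loop follows; it is not minikube's actual implementation, and the poll interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the timeout elapses, mirroring the polling visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil // the default ServiceAccount exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for the default service account", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0/kubectl", "/var/lib/minikube/kubeconfig", time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }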
	I0829 19:39:54.674046   79073 kubeadm.go:394] duration metric: took 4m58.910639816s to StartCluster
	I0829 19:39:54.674070   79073 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.674178   79073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:39:54.675789   79073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.676038   79073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:39:54.676095   79073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:39:54.676184   79073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-920571"
	I0829 19:39:54.676210   79073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-920571"
	I0829 19:39:54.676222   79073 addons.go:69] Setting metrics-server=true in profile "embed-certs-920571"
	I0829 19:39:54.676225   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:39:54.676241   79073 addons.go:234] Setting addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:54.676264   79073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920571"
	I0829 19:39:54.676216   79073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-920571"
	W0829 19:39:54.676329   79073 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:39:54.676360   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	W0829 19:39:54.676392   79073 addons.go:243] addon metrics-server should already be in state true
	I0829 19:39:54.676455   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.676650   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676664   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676682   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676684   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676824   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676859   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.677794   79073 out.go:177] * Verifying Kubernetes components...
	I0829 19:39:54.679112   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:39:54.694669   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:39:54.694717   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0829 19:39:54.695090   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695420   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695532   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0829 19:39:54.695640   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695656   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695925   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695948   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695951   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.696038   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696266   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696373   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.696392   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.696443   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.696600   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.696629   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.696745   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.697378   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.697413   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.702955   79073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-920571"
	W0829 19:39:54.702978   79073 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:39:54.703003   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.703347   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.703377   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.714194   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0829 19:39:54.714526   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0829 19:39:54.714735   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.714916   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.715368   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715369   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715389   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715401   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715712   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715713   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715944   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.715943   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.717556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.717758   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.718972   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0829 19:39:54.719212   79073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:39:54.719303   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.719212   79073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:39:52.236231   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.238843   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.719723   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.719735   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.720033   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.720307   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:39:54.720322   79073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:39:54.720342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.720533   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.720559   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.720952   79073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:54.720975   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:39:54.720992   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.723754   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724174   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.724198   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724516   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.724684   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.724820   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.724879   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724973   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.725426   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.725466   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.725687   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.725827   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.725982   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.726117   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.743443   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0829 19:39:54.744025   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.744590   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.744618   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.744908   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.745030   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.746560   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.746809   79073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:54.746819   79073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:39:54.746831   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.749422   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749802   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.749827   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749904   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.750058   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.750206   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.750320   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.902922   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:39:54.921933   79073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936483   79073 node_ready.go:49] node "embed-certs-920571" has status "Ready":"True"
	I0829 19:39:54.936513   79073 node_ready.go:38] duration metric: took 14.542582ms for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936524   79073 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:54.945389   79073 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:55.076394   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:39:55.076421   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:39:55.089140   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:55.096473   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:55.128207   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:39:55.128235   79073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:39:55.186402   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.186429   79073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:39:55.262731   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.548177   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548521   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548542   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.548555   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548564   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548824   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548857   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Closing plugin on server side
	I0829 19:39:55.548872   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.555956   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.555971   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.556210   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.556227   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020289   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020317   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020610   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020632   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020642   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020650   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020912   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020931   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.369749   79073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.106975723s)
	I0829 19:39:56.369809   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.369825   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370119   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370143   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370154   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.370168   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370407   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370428   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370440   79073 addons.go:475] Verifying addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:56.373030   79073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:39:56.374322   79073 addons.go:510] duration metric: took 1.698226444s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:39:56.460329   79073 pod_ready.go:93] pod "etcd-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:56.460362   79073 pod_ready.go:82] duration metric: took 1.51494335s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:56.460375   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467017   79073 pod_ready.go:93] pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:58.467040   79073 pod_ready.go:82] duration metric: took 2.006657264s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467050   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:59.826535   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.4017346s)
	I0829 19:39:59.826609   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:59.849311   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:59.859855   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:59.874237   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:59.874262   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:59.874315   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:39:59.883719   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:59.883785   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:59.893307   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:39:59.902478   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:59.902519   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:59.912664   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.932387   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:59.932443   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.948358   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:39:59.965812   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:59.965867   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:59.975437   79559 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:00.022167   79559 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:00.022347   79559 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:00.126622   79559 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:00.126777   79559 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:00.126914   79559 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:00.135123   79559 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:56.736712   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:59.235639   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:00.137714   79559 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:00.137806   79559 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:00.137875   79559 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:00.138003   79559 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:00.138114   79559 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:00.138184   79559 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:00.138240   79559 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:00.138297   79559 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:00.138351   79559 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:00.138443   79559 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:00.138555   79559 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:00.138607   79559 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:00.138682   79559 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:00.368674   79559 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:00.454426   79559 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:00.576835   79559 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:00.650342   79559 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:01.038392   79559 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:01.038806   79559 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:01.041297   79559 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:01.043020   79559 out.go:235]   - Booting up control plane ...
	I0829 19:40:01.043127   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:01.043224   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:01.043501   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:01.062342   79559 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:01.068185   79559 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:01.068247   79559 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:01.202906   79559 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:01.203076   79559 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:01.705241   79559 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.336154ms
	I0829 19:40:01.705368   79559 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:00.476336   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:02.973188   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.473576   79073 pod_ready.go:93] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.473607   79073 pod_ready.go:82] duration metric: took 5.006550689s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.473616   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478026   79073 pod_ready.go:93] pod "kube-proxy-25cmq" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.478045   79073 pod_ready.go:82] duration metric: took 4.423884ms for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478054   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482541   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.482560   79073 pod_ready.go:82] duration metric: took 4.499742ms for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482566   79073 pod_ready.go:39] duration metric: took 8.54603076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
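The pod_ready waits above (and the recurring metrics-server "Ready":"False" lines from the other profiles) come down to listing kube-system pods and inspecting their PodReady condition. A minimal client-go sketch of that check, assuming the kubeconfig path from the log is reachable from where it runs; this is illustrative and not the test harness code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19531-13056/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, podIsReady(p))
        }
    }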
	I0829 19:40:03.482581   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:03.482623   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:03.502670   79073 api_server.go:72] duration metric: took 8.826595134s to wait for apiserver process to appear ...
	I0829 19:40:03.502695   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:03.502718   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:40:03.507953   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:40:03.508948   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:03.508968   79073 api_server.go:131] duration metric: took 6.265433ms to wait for apiserver health ...
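The healthz wait above is essentially an HTTPS GET against the apiserver's /healthz endpoint until it answers 200 "ok". A throwaway probe along those lines is sketched below; certificate verification is skipped only because this is a local illustrative check, and the URL is the one from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz returns nil when the endpoint answers HTTP 200.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.61.243:8443/healthz"))
    }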
	I0829 19:40:03.508977   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:03.514929   79073 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:03.514962   79073 system_pods.go:61] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.514971   79073 system_pods.go:61] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.514979   79073 system_pods.go:61] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.514987   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.514994   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.515000   79073 system_pods.go:61] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.515009   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.515018   79073 system_pods.go:61] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.515027   79073 system_pods.go:61] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.515036   79073 system_pods.go:74] duration metric: took 6.052187ms to wait for pod list to return data ...
	I0829 19:40:03.515046   79073 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:03.518040   79073 default_sa.go:45] found service account: "default"
	I0829 19:40:03.518060   79073 default_sa.go:55] duration metric: took 3.004653ms for default service account to be created ...
	I0829 19:40:03.518069   79073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:03.523915   79073 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:03.523942   79073 system_pods.go:89] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.523949   79073 system_pods.go:89] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.523954   79073 system_pods.go:89] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.523958   79073 system_pods.go:89] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.523962   79073 system_pods.go:89] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.523965   79073 system_pods.go:89] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.523968   79073 system_pods.go:89] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.523973   79073 system_pods.go:89] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.523978   79073 system_pods.go:89] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.523986   79073 system_pods.go:126] duration metric: took 5.911567ms to wait for k8s-apps to be running ...
	I0829 19:40:03.523997   79073 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:03.524049   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:03.541502   79073 system_svc.go:56] duration metric: took 17.4955ms WaitForService to wait for kubelet
	I0829 19:40:03.541538   79073 kubeadm.go:582] duration metric: took 8.865466463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:03.541564   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:03.544700   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:03.544728   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:03.544744   79073 node_conditions.go:105] duration metric: took 3.172559ms to run NodePressure ...
	I0829 19:40:03.544758   79073 start.go:241] waiting for startup goroutines ...
	I0829 19:40:03.544771   79073 start.go:246] waiting for cluster config update ...
	I0829 19:40:03.544789   79073 start.go:255] writing updated cluster config ...
	I0829 19:40:03.545136   79073 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:03.609413   79073 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:03.611490   79073 out.go:177] * Done! kubectl is now configured to use "embed-certs-920571" cluster and "default" namespace by default
	I0829 19:40:01.236210   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.236420   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:05.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:06.707891   79559 kubeadm.go:310] [api-check] The API server is healthy after 5.002523987s
	I0829 19:40:06.719470   79559 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:06.733886   79559 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:06.759672   79559 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:06.759933   79559 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-672127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:06.771514   79559 kubeadm.go:310] [bootstrap-token] Using token: fzav4x.eeztheucmrep51py
	I0829 19:40:06.772887   79559 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:06.773014   79559 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:06.778644   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:06.792388   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:06.798560   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:06.801930   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:06.805767   79559 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:07.119680   79559 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:07.536660   79559 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:08.115528   79559 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:08.115550   79559 kubeadm.go:310] 
	I0829 19:40:08.115621   79559 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:08.115657   79559 kubeadm.go:310] 
	I0829 19:40:08.115780   79559 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:08.115802   79559 kubeadm.go:310] 
	I0829 19:40:08.115843   79559 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:08.115929   79559 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:08.116002   79559 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:08.116011   79559 kubeadm.go:310] 
	I0829 19:40:08.116087   79559 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:08.116099   79559 kubeadm.go:310] 
	I0829 19:40:08.116154   79559 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:08.116173   79559 kubeadm.go:310] 
	I0829 19:40:08.116247   79559 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:08.116386   79559 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:08.116477   79559 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:08.116487   79559 kubeadm.go:310] 
	I0829 19:40:08.116599   79559 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:08.116705   79559 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:08.116712   79559 kubeadm.go:310] 
	I0829 19:40:08.116779   79559 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.116879   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:08.116931   79559 kubeadm.go:310] 	--control-plane 
	I0829 19:40:08.116947   79559 kubeadm.go:310] 
	I0829 19:40:08.117048   79559 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:08.117058   79559 kubeadm.go:310] 
	I0829 19:40:08.117154   79559 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.117270   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:08.118512   79559 kubeadm.go:310] W0829 19:39:59.991394    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118870   79559 kubeadm.go:310] W0829 19:39:59.992249    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118981   79559 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:08.119009   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:40:08.119019   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:08.120832   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:08.122029   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:08.133326   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:08.150808   79559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:08.150867   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:08.150884   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-672127 minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=default-k8s-diff-port-672127 minikube.k8s.io/primary=true
	I0829 19:40:08.170047   79559 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:08.350103   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:07.736119   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:10.236910   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:08.850762   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.350244   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.850222   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.350462   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.850237   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.350179   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.851033   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.351069   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.442963   79559 kubeadm.go:1113] duration metric: took 4.29215456s to wait for elevateKubeSystemPrivileges
	I0829 19:40:12.442998   79559 kubeadm.go:394] duration metric: took 4m56.544013459s to StartCluster
	I0829 19:40:12.443020   79559 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.443110   79559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:40:12.444757   79559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.444998   79559 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:40:12.445061   79559 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:40:12.445138   79559 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445151   79559 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445173   79559 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445181   79559 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:40:12.445179   79559 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-672127"
	I0829 19:40:12.445210   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445210   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:40:12.445266   79559 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445313   79559 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445323   79559 addons.go:243] addon metrics-server should already be in state true
	I0829 19:40:12.445347   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445625   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445658   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445662   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445683   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445737   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445775   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.446414   79559 out.go:177] * Verifying Kubernetes components...
	I0829 19:40:12.447652   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:40:12.461386   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0829 19:40:12.461436   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0829 19:40:12.461805   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.461831   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462057   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0829 19:40:12.462324   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462327   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462341   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462347   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462373   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462701   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462798   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462807   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462817   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462886   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.463109   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.463360   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463392   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.463586   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463607   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.465961   79559 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.465971   79559 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:40:12.465991   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.466309   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.466355   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.480989   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0829 19:40:12.481216   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 19:40:12.481407   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481639   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481843   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.481858   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482222   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.482249   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482291   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482440   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.482576   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482745   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.484681   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485336   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485664   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0829 19:40:12.486377   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.486547   79559 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:40:12.486922   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.486945   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.487310   79559 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:40:12.487586   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.488042   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:40:12.488060   79559 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:40:12.488081   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.488266   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.488307   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.488874   79559 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.488897   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:40:12.488914   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.492291   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492699   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492814   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.492844   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493059   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493128   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.493144   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493259   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493300   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493432   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.493471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493822   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.493972   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.494114   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.505220   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0829 19:40:12.505690   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.506337   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.506363   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.506727   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.506899   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.508602   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.508796   79559 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.508810   79559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:40:12.508829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.511310   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511660   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.511691   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.511969   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.512110   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.512253   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.642279   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:40:12.666598   79559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682873   79559 node_ready.go:49] node "default-k8s-diff-port-672127" has status "Ready":"True"
	I0829 19:40:12.682895   79559 node_ready.go:38] duration metric: took 16.267143ms for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682904   79559 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:12.693451   79559 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
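	(Note: the pod_ready waits above and below poll the API server for each system-critical pod. A rough manual equivalent, expressed as kubectl commands rather than the in-process check the test actually uses, and assuming the kubeconfig context for this profile is already set up, would be:)

	        # Sketch only — not part of the test output.
	        kubectl --context default-k8s-diff-port-672127 -n kube-system \
	          wait --for=condition=Ready pod/etcd-default-k8s-diff-port-672127 --timeout=6m
	        # or for a whole label set, e.g. the kube-dns pods the extra wait also covers:
	        kubectl --context default-k8s-diff-port-672127 -n kube-system \
	          wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m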
	I0829 19:40:12.736525   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:40:12.736548   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:40:12.754764   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:40:12.754786   79559 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:40:12.806826   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:12.806856   79559 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:40:12.817164   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.837896   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.903140   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:14.124266   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.307063383s)
	I0829 19:40:14.124305   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286373382s)
	I0829 19:40:14.124324   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124343   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124430   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221258684s)
	I0829 19:40:14.124473   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124487   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124635   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124649   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124659   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124667   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124794   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124813   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124831   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124848   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124856   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124873   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124864   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124882   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124896   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124902   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124913   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124935   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.125356   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.125359   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.125381   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126568   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.126637   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.126656   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126704   79559 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-672127"
	I0829 19:40:14.193216   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.193238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.193544   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.193562   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.195467   79559 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0829 19:40:12.237641   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.736679   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.196698   79559 addons.go:510] duration metric: took 1.751639165s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0829 19:40:14.720042   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.199482   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.235908   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.735901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.199705   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.699776   79559 pod_ready.go:93] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.699801   79559 pod_ready.go:82] duration metric: took 7.006327617s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.699810   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704240   79559 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.704261   79559 pod_ready.go:82] duration metric: took 4.444744ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704269   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710740   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.710761   79559 pod_ready.go:82] duration metric: took 2.006486043s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710770   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715111   79559 pod_ready.go:93] pod "kube-proxy-nqbn4" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.715134   79559 pod_ready.go:82] duration metric: took 4.357535ms for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715146   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719192   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.719207   79559 pod_ready.go:82] duration metric: took 4.054087ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719222   79559 pod_ready.go:39] duration metric: took 9.036299009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:21.719234   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:21.719289   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:21.734507   79559 api_server.go:72] duration metric: took 9.289477227s to wait for apiserver process to appear ...
	I0829 19:40:21.734531   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:21.734555   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:40:21.739963   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:40:21.740847   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:21.740865   79559 api_server.go:131] duration metric: took 6.327694ms to wait for apiserver health ...
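	(Note: the healthz probe above hits the apiserver over HTTPS on port 8444, this profile's non-default API port. A rough manual equivalent from the host, skipping certificate verification for brevity, would be:)

	        # Illustrative only; the test performs this check in-process, not via curl.
	        curl -k https://192.168.50.70:8444/healthz    # expect: ok
	        curl -k https://192.168.50.70:8444/version    # reports the control-plane version (v1.31.0 here)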
	I0829 19:40:21.740872   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:21.747609   79559 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:21.747636   79559 system_pods.go:61] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.747643   79559 system_pods.go:61] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:21.747648   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.747654   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.747659   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.747662   79559 system_pods.go:61] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.747665   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.747670   79559 system_pods.go:61] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.747674   79559 system_pods.go:61] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.747680   79559 system_pods.go:74] duration metric: took 6.803459ms to wait for pod list to return data ...
	I0829 19:40:21.747689   79559 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:21.750153   79559 default_sa.go:45] found service account: "default"
	I0829 19:40:21.750168   79559 default_sa.go:55] duration metric: took 2.474593ms for default service account to be created ...
	I0829 19:40:21.750175   79559 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:21.901186   79559 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:21.901213   79559 system_pods.go:89] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.901219   79559 system_pods.go:89] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running
	I0829 19:40:21.901222   79559 system_pods.go:89] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.901227   79559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.901231   79559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.901235   79559 system_pods.go:89] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.901238   79559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.901245   79559 system_pods.go:89] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.901249   79559 system_pods.go:89] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.901257   79559 system_pods.go:126] duration metric: took 151.07798ms to wait for k8s-apps to be running ...
	I0829 19:40:21.901263   79559 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:21.901306   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:21.916730   79559 system_svc.go:56] duration metric: took 15.457902ms WaitForService to wait for kubelet
	I0829 19:40:21.916757   79559 kubeadm.go:582] duration metric: took 9.471732105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:21.916773   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:22.099083   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:22.099119   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:22.099133   79559 node_conditions.go:105] duration metric: took 182.354927ms to run NodePressure ...
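	(Note: the NodePressure verification reads the node's capacity and conditions from the API. The same figures — ephemeral-storage 17734596Ki, cpu 2 — could be pulled by hand; a sketch, not part of the run:)

	        kubectl --context default-k8s-diff-port-672127 get node default-k8s-diff-port-672127 \
	          -o jsonpath='{.status.capacity}'
	        # Conditions such as MemoryPressure / DiskPressure / PIDPressure:
	        kubectl --context default-k8s-diff-port-672127 describe node default-k8s-diff-port-672127 | grep -A8 Conditions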
	I0829 19:40:22.099147   79559 start.go:241] waiting for startup goroutines ...
	I0829 19:40:22.099156   79559 start.go:246] waiting for cluster config update ...
	I0829 19:40:22.099168   79559 start.go:255] writing updated cluster config ...
	I0829 19:40:22.099536   79559 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:22.148307   79559 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:22.150361   79559 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-672127" cluster and "default" namespace by default
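	(Note: once minikube reports the profile as ready, the kubeconfig context it wrote can be exercised directly. A minimal sanity check might look like the following; the storage class name "standard" is minikube's usual default and is an assumption, not shown in this log:)

	        kubectl config current-context                       # default-k8s-diff-port-672127
	        kubectl get nodes -o wide
	        kubectl -n kube-system get deploy metrics-server     # metrics-server addon enabled above
	        kubectl get storageclass                             # default-storageclass addon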
	I0829 19:40:21.736121   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:23.229905   78865 pod_ready.go:82] duration metric: took 4m0.000141946s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	E0829 19:40:23.229943   78865 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:40:23.229991   78865 pod_ready.go:39] duration metric: took 4m10.70989222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
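	(Note: the 4m timeout above is consistent with how the metrics-server addon is configured in these tests — the "Using image fake.domain/registry.k8s.io/echoserver:1.4" lines elsewhere in this log show it is deployed with a stub image that cannot be pulled, so the pod is expected to stay unready. A sketch of how that could be confirmed, assuming kubectl points at the profile's cluster:)

	        kubectl -n kube-system get deploy metrics-server \
	          -o jsonpath='{.spec.template.spec.containers[0].image}'   # expected: fake.domain/registry.k8s.io/echoserver:1.4
	        kubectl -n kube-system get pods | grep metrics-server       # stays Pending / not Ready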
	I0829 19:40:23.230021   78865 kubeadm.go:597] duration metric: took 4m18.600330645s to restartPrimaryControlPlane
	W0829 19:40:23.230078   78865 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:40:23.230136   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
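	(Note: the repeated 10248 failures above come from kubeadm's kubelet health check. When diagnosing them by hand one would typically probe the same endpoint and the kubelet unit directly; a sketch, assuming shell access to the node, e.g. via `minikube ssh`:)

	        curl -sS http://localhost:10248/healthz       # the exact URL kubeadm polls
	        sudo systemctl status kubelet --no-pager
	        sudo journalctl -u kubelet --no-pager | tail -n 50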
	I0829 19:40:49.374221   78865 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.144057875s)
	I0829 19:40:49.374297   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:49.389586   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:40:49.399146   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:40:49.408450   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:40:49.408469   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:40:49.408521   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:40:49.417651   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:40:49.417706   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:40:49.427073   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:40:49.435307   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:40:49.435356   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:40:49.443720   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.452437   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:40:49.452493   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.461133   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:40:49.469515   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:40:49.469564   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
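	(Note: the four grep/rm pairs above are minikube's stale-config cleanup — any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before `kubeadm init` regenerates it. A condensed sketch of the same logic, illustrative rather than the actual minikube code:)

	        for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	          sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	            || sudo rm -f "/etc/kubernetes/$f"
	        done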
	I0829 19:40:49.478224   78865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:49.523193   78865 kubeadm.go:310] W0829 19:40:49.504457    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.523801   78865 kubeadm.go:310] W0829 19:40:49.505165    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.640221   78865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:57.429227   78865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:57.429293   78865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:57.429396   78865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:57.429536   78865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:57.429665   78865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:57.429757   78865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:40:57.431358   78865 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:57.431434   78865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:57.431485   78865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:57.431566   78865 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:57.431640   78865 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:57.431711   78865 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:57.431786   78865 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:57.431847   78865 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:57.431893   78865 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:57.431956   78865 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:57.432013   78865 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:57.432052   78865 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:57.432109   78865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:57.432186   78865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:57.432275   78865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:57.432352   78865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:57.432444   78865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:57.432518   78865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:57.432595   78865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:57.432648   78865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:57.434057   78865 out.go:235]   - Booting up control plane ...
	I0829 19:40:57.434161   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:57.434245   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:57.434298   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:57.434396   78865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:57.434475   78865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:57.434509   78865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:57.434687   78865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:57.434772   78865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:57.434824   78865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.075612ms
	I0829 19:40:57.434887   78865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:57.434932   78865 kubeadm.go:310] [api-check] The API server is healthy after 5.002117161s
	I0829 19:40:57.435094   78865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:57.435232   78865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:57.435284   78865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:57.435429   78865 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-690795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:57.435472   78865 kubeadm.go:310] [bootstrap-token] Using token: adxyev.rcmf9k5ok190h0g1
	I0829 19:40:57.436846   78865 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:57.436936   78865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:57.437001   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:57.437113   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:57.437214   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:57.437307   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:57.437380   78865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:57.437480   78865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:57.437528   78865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:57.437577   78865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:57.437583   78865 kubeadm.go:310] 
	I0829 19:40:57.437635   78865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:57.437641   78865 kubeadm.go:310] 
	I0829 19:40:57.437704   78865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:57.437710   78865 kubeadm.go:310] 
	I0829 19:40:57.437744   78865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:57.437807   78865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:57.437851   78865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:57.437857   78865 kubeadm.go:310] 
	I0829 19:40:57.437907   78865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:57.437913   78865 kubeadm.go:310] 
	I0829 19:40:57.437951   78865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:57.437957   78865 kubeadm.go:310] 
	I0829 19:40:57.438000   78865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:57.438107   78865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:57.438188   78865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:57.438200   78865 kubeadm.go:310] 
	I0829 19:40:57.438289   78865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:57.438359   78865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:57.438364   78865 kubeadm.go:310] 
	I0829 19:40:57.438429   78865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438507   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:57.438525   78865 kubeadm.go:310] 	--control-plane 
	I0829 19:40:57.438534   78865 kubeadm.go:310] 
	I0829 19:40:57.438611   78865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:57.438621   78865 kubeadm.go:310] 
	I0829 19:40:57.438688   78865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438791   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:57.438814   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:40:57.438825   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:57.440836   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:57.442065   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:57.452700   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
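	(Note: the 496-byte file copied here is the bridge CNI configuration minikube generates for the crio runtime; its exact contents are not shown in the log. It can be inspected on the node — a sketch, assuming shell access via `minikube ssh`:)

	        ls /etc/cni/net.d/
	        sudo cat /etc/cni/net.d/1-k8s.conflist
	        # Expect a "bridge" plugin entry with host-local IPAM for the pod network.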
	I0829 19:40:57.469549   78865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:57.469621   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:57.469656   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-690795 minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=no-preload-690795 minikube.k8s.io/primary=true
	I0829 19:40:57.503411   78865 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:57.648807   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.149067   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.649770   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.148932   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.649114   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.149833   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.649474   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.149795   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.649154   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.745084   78865 kubeadm.go:1113] duration metric: took 4.275525047s to wait for elevateKubeSystemPrivileges
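	(Note: the repeated `kubectl get sa default` calls above are minikube polling for the default service account before the kube-system:default cluster-admin binding created earlier can take effect. A condensed sketch of that wait loop, using the same binary and kubeconfig paths the log shows:)

	        until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	              --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	          sleep 0.5
	        done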
	I0829 19:41:01.745117   78865 kubeadm.go:394] duration metric: took 4m57.169926854s to StartCluster
	I0829 19:41:01.745134   78865 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.745209   78865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:41:01.746775   78865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.747005   78865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:41:01.747062   78865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:41:01.747155   78865 addons.go:69] Setting storage-provisioner=true in profile "no-preload-690795"
	I0829 19:41:01.747175   78865 addons.go:69] Setting default-storageclass=true in profile "no-preload-690795"
	I0829 19:41:01.747189   78865 addons.go:234] Setting addon storage-provisioner=true in "no-preload-690795"
	W0829 19:41:01.747199   78865 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:41:01.747200   78865 addons.go:69] Setting metrics-server=true in profile "no-preload-690795"
	I0829 19:41:01.747240   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747246   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:41:01.747243   78865 addons.go:234] Setting addon metrics-server=true in "no-preload-690795"
	W0829 19:41:01.747307   78865 addons.go:243] addon metrics-server should already be in state true
	I0829 19:41:01.747333   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747206   78865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-690795"
	I0829 19:41:01.747652   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747670   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747678   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747702   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747780   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747810   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.748790   78865 out.go:177] * Verifying Kubernetes components...
	I0829 19:41:01.750069   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:41:01.764006   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0829 19:41:01.765511   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766194   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.766218   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.766287   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0829 19:41:01.766670   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766694   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.766912   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.766965   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0829 19:41:01.767129   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767149   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.767304   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.767506   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.767737   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767755   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.768073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.768202   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768241   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.768615   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768646   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.771065   78865 addons.go:234] Setting addon default-storageclass=true in "no-preload-690795"
	W0829 19:41:01.771088   78865 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:41:01.771117   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.771415   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.771441   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.787271   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0829 19:41:01.788003   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.788577   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.788606   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.788885   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0829 19:41:01.789065   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0829 19:41:01.789073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.789361   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.789716   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.789774   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.790084   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.790243   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.790319   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.791018   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.791029   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.791393   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.791721   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.792306   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793557   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793806   78865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:41:01.794942   78865 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:41:01.795033   78865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:01.795049   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:41:01.795067   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.796032   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:41:01.796048   78865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:41:01.796065   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.799646   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800163   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800618   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800826   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800843   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800941   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801043   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801114   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801184   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801239   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801367   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.801484   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.807187   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0829 19:41:01.807604   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.808056   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.808070   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.808471   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.808671   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.810374   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.810569   78865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:01.810579   78865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:41:01.810591   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.813314   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.813766   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.813776   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.814029   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.814187   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.814292   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.814379   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.963011   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:41:01.981935   78865 node_ready.go:35] waiting up to 6m0s for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998366   78865 node_ready.go:49] node "no-preload-690795" has status "Ready":"True"
	I0829 19:41:01.998389   78865 node_ready.go:38] duration metric: took 16.418591ms for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998398   78865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:02.005811   78865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:02.053495   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:02.197657   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:02.239853   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:41:02.239877   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:41:02.270764   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:41:02.270789   78865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:41:02.327819   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.327853   78865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:41:02.380812   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.380843   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381117   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381191   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.381209   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.381217   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381432   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381444   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.384211   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.387013   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.387027   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.387286   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:02.387333   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.387345   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.027502   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:03.027535   78865 pod_ready.go:82] duration metric: took 1.02170157s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.027550   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.410428   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212715771s)
	I0829 19:41:03.410485   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.410503   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412586   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.412590   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412614   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412625   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.412632   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412926   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412947   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412954   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.587379   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203116606s)
	I0829 19:41:03.587437   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587452   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587770   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.587840   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.587859   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587874   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587878   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.588185   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.588206   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.588218   78865 addons.go:475] Verifying addon metrics-server=true in "no-preload-690795"
	I0829 19:41:03.588192   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.590131   78865 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:03.591280   78865 addons.go:510] duration metric: took 1.844219817s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:41:05.035315   78865 pod_ready.go:103] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"False"
	I0829 19:41:06.033037   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:06.033060   78865 pod_ready.go:82] duration metric: took 3.005501862s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:06.033068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039035   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.039059   78865 pod_ready.go:82] duration metric: took 1.005984859s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043096   78865 pod_ready.go:93] pod "kube-proxy-p7zvh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.043116   78865 pod_ready.go:82] duration metric: took 4.042896ms for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043125   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046934   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.046957   78865 pod_ready.go:82] duration metric: took 3.826283ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046966   78865 pod_ready.go:39] duration metric: took 5.048560252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:07.046983   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:41:07.047036   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:41:07.062234   78865 api_server.go:72] duration metric: took 5.315200823s to wait for apiserver process to appear ...
	I0829 19:41:07.062256   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:41:07.062277   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:41:07.068022   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:41:07.069170   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:41:07.069190   78865 api_server.go:131] duration metric: took 6.927858ms to wait for apiserver health ...
	I0829 19:41:07.069198   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:41:07.075909   78865 system_pods.go:59] 9 kube-system pods found
	I0829 19:41:07.075932   78865 system_pods.go:61] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.075939   78865 system_pods.go:61] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.075944   78865 system_pods.go:61] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.075949   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.075953   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.075956   78865 system_pods.go:61] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.075960   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.075964   78865 system_pods.go:61] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.075968   78865 system_pods.go:61] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.075975   78865 system_pods.go:74] duration metric: took 6.771333ms to wait for pod list to return data ...
	I0829 19:41:07.075985   78865 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:41:07.079235   78865 default_sa.go:45] found service account: "default"
	I0829 19:41:07.079255   78865 default_sa.go:55] duration metric: took 3.264804ms for default service account to be created ...
	I0829 19:41:07.079263   78865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:41:07.083981   78865 system_pods.go:86] 9 kube-system pods found
	I0829 19:41:07.084006   78865 system_pods.go:89] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.084014   78865 system_pods.go:89] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.084019   78865 system_pods.go:89] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.084025   78865 system_pods.go:89] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.084029   78865 system_pods.go:89] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.084032   78865 system_pods.go:89] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.084037   78865 system_pods.go:89] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.084042   78865 system_pods.go:89] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.084045   78865 system_pods.go:89] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.084052   78865 system_pods.go:126] duration metric: took 4.784448ms to wait for k8s-apps to be running ...
	I0829 19:41:07.084062   78865 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:41:07.084104   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:07.098513   78865 system_svc.go:56] duration metric: took 14.440998ms WaitForService to wait for kubelet
	I0829 19:41:07.098551   78865 kubeadm.go:582] duration metric: took 5.351518255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:41:07.098574   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:41:07.231160   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:41:07.231189   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:41:07.231200   78865 node_conditions.go:105] duration metric: took 132.62068ms to run NodePressure ...
	I0829 19:41:07.231209   78865 start.go:241] waiting for startup goroutines ...
	I0829 19:41:07.231216   78865 start.go:246] waiting for cluster config update ...
	I0829 19:41:07.231225   78865 start.go:255] writing updated cluster config ...
	I0829 19:41:07.231503   78865 ssh_runner.go:195] Run: rm -f paused
	I0829 19:41:07.283204   78865 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:41:07.284751   78865 out.go:177] * Done! kubectl is now configured to use "no-preload-690795" cluster and "default" namespace by default
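For reference, the readiness gates walked through above (node "Ready", system-critical pods, apiserver /healthz, default service account, kubelet service) can be reproduced by hand against the finished profile. A minimal sketch, assuming the kubeconfig context minikube wrote ("no-preload-690795") is still current and the API server address from the log:

    # node and system-critical pods, mirroring the node_ready/pod_ready waits above
    kubectl --context no-preload-690795 get nodes
    kubectl --context no-preload-690795 get pods -n kube-system
    # same healthz probe api_server.go issues against https://192.168.39.76:8443/healthz
    kubectl --context no-preload-690795 get --raw /healthz
    # kubelet service check system_svc.go performs over SSH
    minikube -p no-preload-690795 ssh -- sudo systemctl is-active kubelet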
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
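The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before the kubeadm retry. A minimal sketch of the same loop, assuming the node-side paths shown in the log:

    # drop kubeconfigs that do not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done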
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 
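The failure above ends with kubeadm's own troubleshooting advice plus minikube's suggestion about the kubelet cgroup driver. Collected in one place, and assuming the profile name from the CRI-O log below (old-k8s-version-467349), the follow-up would look roughly like this, with the node-side commands run on the VM (e.g. via minikube ssh):

    # inspect the kubelet, as suggested by the kubeadm output
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # list any control-plane containers CRI-O did manage to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # retry with the cgroup-driver hint from the Suggestion line
    minikube start -p old-k8s-version-467349 --extra-config=kubelet.cgroup-driver=systemd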
	
	
	==> CRI-O <==
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.130544142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961163130516925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f6632fe-0896-4ca1-a2de-b4b5ba823c06 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.131157930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64dc90dd-877f-4f9e-b37e-9f02b74fbd39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.131231766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64dc90dd-877f-4f9e-b37e-9f02b74fbd39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.131270034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=64dc90dd-877f-4f9e-b37e-9f02b74fbd39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.161898280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f6f5528-bc78-4dfe-9ed3-aac8e481a4ed name=/runtime.v1.RuntimeService/Version
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.162007515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f6f5528-bc78-4dfe-9ed3-aac8e481a4ed name=/runtime.v1.RuntimeService/Version
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.162900425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21f4b56a-8708-4002-9c47-cab123efef4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.163295929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961163163277182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21f4b56a-8708-4002-9c47-cab123efef4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.163763657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a997087-5185-4416-b2ad-2e7d42bef60e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.163872568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a997087-5185-4416-b2ad-2e7d42bef60e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.163920614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7a997087-5185-4416-b2ad-2e7d42bef60e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.195174978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=236fc65f-f674-4c4a-a781-596c59d8fc9a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.195279537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=236fc65f-f674-4c4a-a781-596c59d8fc9a name=/runtime.v1.RuntimeService/Version
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.196450227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ed44214-fa8c-407c-ab3e-6c17f01bb04f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.196973550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961163196939735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ed44214-fa8c-407c-ab3e-6c17f01bb04f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.197497554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de25c25e-152b-43cd-8d14-fdbaf1db6106 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.197567309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de25c25e-152b-43cd-8d14-fdbaf1db6106 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.197599719Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=de25c25e-152b-43cd-8d14-fdbaf1db6106 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.227905123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=981f28ee-5eeb-4c24-8b99-f2dda6392ece name=/runtime.v1.RuntimeService/Version
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.228003192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=981f28ee-5eeb-4c24-8b99-f2dda6392ece name=/runtime.v1.RuntimeService/Version
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.229031729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50a3c67d-fffc-4a1a-a6cd-88da09168811 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.229476170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961163229446322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50a3c67d-fffc-4a1a-a6cd-88da09168811 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.229960062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b392e25-d881-4abf-af8b-252f8f7f83c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.230028918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b392e25-d881-4abf-af8b-252f8f7f83c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:52:43 old-k8s-version-467349 crio[629]: time="2024-08-29 19:52:43.230060728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9b392e25-d881-4abf-af8b-252f8f7f83c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug29 19:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052596] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.984718] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595405] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.892866] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.060569] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055946] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.216571] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.121311] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.242095] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.546376] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.055907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.984348] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +14.158991] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 19:39] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Aug29 19:41] systemd-fstab-generator[5395]: Ignoring "noauto" option for root device
	[  +0.067610] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:52:43 up 17 min,  0 users,  load average: 0.03, 0.05, 0.03
	Linux old-k8s-version-467349 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: net.(*sysDialer).dialSingle(0xc000b0f380, 0x4f7fe40, 0xc000204d80, 0x4f1ff00, 0xc000c96240, 0x0, 0x0, 0x0, 0x0)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: net.(*sysDialer).dialSerial(0xc000b0f380, 0x4f7fe40, 0xc000204d80, 0xc0009ee4d0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: net.(*Dialer).DialContext(0xc00016a5a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00077abd0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b06ea0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00077abd0, 0x24, 0x60, 0x7f1ddfdc7c10, 0x118, ...)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: net/http.(*Transport).dial(0xc000894000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00077abd0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: net/http.(*Transport).dialConn(0xc000894000, 0x4f7fe00, 0xc000052030, 0x0, 0xc0009aa300, 0x5, 0xc00077abd0, 0x24, 0x0, 0xc000897c20, ...)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: net/http.(*Transport).dialConnFor(0xc000894000, 0xc000befa20)
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: created by net/http.(*Transport).queueForDial
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 29 19:52:37 old-k8s-version-467349 kubelet[6568]: E0829 19:52:37.898751    6568 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dold-k8s-version-467349&limit=500&resourceVersion=0": dial tcp 192.168.72.112:8443: connect: connection refused
	Aug 29 19:52:38 old-k8s-version-467349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 29 19:52:38 old-k8s-version-467349 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 29 19:52:38 old-k8s-version-467349 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 29 19:52:38 old-k8s-version-467349 kubelet[6578]: I0829 19:52:38.633221    6578 server.go:416] Version: v1.20.0
	Aug 29 19:52:38 old-k8s-version-467349 kubelet[6578]: I0829 19:52:38.633628    6578 server.go:837] Client rotation is on, will bootstrap in background
	Aug 29 19:52:38 old-k8s-version-467349 kubelet[6578]: I0829 19:52:38.635678    6578 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 29 19:52:38 old-k8s-version-467349 kubelet[6578]: I0829 19:52:38.637017    6578 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 29 19:52:38 old-k8s-version-467349 kubelet[6578]: W0829 19:52:38.637051    6578 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
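The dump above shows CRI-O returning an empty container list while the kubelet unit sits in a restart loop (restart counter 114) and cannot reach the API server on 8443. A minimal sketch, assuming SSH access to the node through the minikube profile, of gathering the same evidence by hand with the commands the kubeadm error itself recommends (the session layout is illustrative, not part of the test harness):

    # open a shell on the profile's VM (hypothetical interactive session)
    minikube ssh -p old-k8s-version-467349

    # inside the VM: check the kubelet unit and its journal
    systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50

    # list any control-plane containers CRI-O knows about, as suggested by kubeadm
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause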
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (212.937058ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-467349" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)
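The earlier start log ends with minikube's own suggestion to retry with an explicit kubelet cgroup driver, and the kubelet journal above reports "Cannot detect current cgroup on cgroup v2", which is consistent with that hint. A hedged sketch of such a retry, reusing the profile flags visible in the audit table (the exact flag set used by this job is an assumption):

    minikube start -p old-k8s-version-467349 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd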

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (437s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-920571 -n embed-certs-920571
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-29 19:56:23.100372518 +0000 UTC m=+6655.821165288
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-920571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-920571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.449µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-920571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
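The assertions above reduce to two kubectl checks: that a pod labelled k8s-app=kubernetes-dashboard is running in the kubernetes-dashboard namespace, and that the dashboard-metrics-scraper deployment carries the overridden registry.k8s.io/echoserver:1.4 image. A minimal sketch of running those checks manually against the same context (plain kubectl, not part of the test harness):

    kubectl --context embed-certs-920571 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard

    kubectl --context embed-certs-920571 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'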
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-920571 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-920571 logs -n 25: (1.21350757s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:55 UTC |
	| start   | -p newest-cni-371258 --memory=2200 --alsologtostderr   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:55 UTC |
	| addons  | enable metrics-server -p newest-cni-371258             | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-371258                                   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-371258                  | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-371258 --memory=2200 --alsologtostderr   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:56:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:56:21.939651   86792 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:56:21.939759   86792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:56:21.939767   86792 out.go:358] Setting ErrFile to fd 2...
	I0829 19:56:21.939771   86792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:56:21.939941   86792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:56:21.940451   86792 out.go:352] Setting JSON to false
	I0829 19:56:21.941331   86792 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9529,"bootTime":1724951853,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:56:21.941388   86792 start.go:139] virtualization: kvm guest
	I0829 19:56:21.943333   86792 out.go:177] * [newest-cni-371258] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:56:21.944531   86792 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:56:21.944534   86792 notify.go:220] Checking for updates...
	I0829 19:56:21.945739   86792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:56:21.946958   86792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:56:21.948111   86792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:56:21.949231   86792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:56:21.950277   86792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:56:21.951705   86792 config.go:182] Loaded profile config "newest-cni-371258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:56:21.952083   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:21.952141   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:21.967068   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I0829 19:56:21.967447   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:21.967935   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:21.967956   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:21.968283   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:21.968490   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:21.968766   86792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:56:21.969152   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:21.969195   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:21.984712   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0829 19:56:21.985069   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:21.985505   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:21.985525   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:21.985878   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:21.986072   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:22.022584   86792 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:56:22.023818   86792 start.go:297] selected driver: kvm2
	I0829 19:56:22.023830   86792 start.go:901] validating driver "kvm2" against &{Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:56:22.023944   86792 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:56:22.024595   86792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:56:22.024669   86792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:56:22.040730   86792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:56:22.041118   86792 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0829 19:56:22.041181   86792 cni.go:84] Creating CNI manager for ""
	I0829 19:56:22.041195   86792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:56:22.041231   86792 start.go:340] cluster config:
	{Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:56:22.041356   86792 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:56:22.042922   86792 out.go:177] * Starting "newest-cni-371258" primary control-plane node in "newest-cni-371258" cluster
	I0829 19:56:22.043915   86792 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:56:22.043954   86792 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:56:22.043961   86792 cache.go:56] Caching tarball of preloaded images
	I0829 19:56:22.044018   86792 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:56:22.044029   86792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:56:22.044118   86792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/config.json ...
	I0829 19:56:22.044304   86792 start.go:360] acquireMachinesLock for newest-cni-371258: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:56:22.044342   86792 start.go:364] duration metric: took 20.363µs to acquireMachinesLock for "newest-cni-371258"
	I0829 19:56:22.044356   86792 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:56:22.044361   86792 fix.go:54] fixHost starting: 
	I0829 19:56:22.044613   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:22.044641   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:22.059859   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0829 19:56:22.060252   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:22.060690   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:22.060707   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:22.061011   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:22.061168   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:22.061316   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:22.062904   86792 fix.go:112] recreateIfNeeded on newest-cni-371258: state=Stopped err=<nil>
	I0829 19:56:22.062928   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	W0829 19:56:22.063084   86792 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:56:22.065720   86792 out.go:177] * Restarting existing kvm2 VM for "newest-cni-371258" ...
	
	
	==> CRI-O <==
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.716692230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e850b6c3-794b-4d41-a9d5-201e7340e1df name=/runtime.v1.RuntimeService/Version
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.717829984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68577e81-eb3b-41a1-a845-9b7e5a2b29ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.718582250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961383718556476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68577e81-eb3b-41a1-a845-9b7e5a2b29ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.719066705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=208b6daa-9318-4b93-9883-7c1646240b97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.719137619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=208b6daa-9318-4b93-9883-7c1646240b97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.719386318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=208b6daa-9318-4b93-9883-7c1646240b97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.745016865Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2ad2577b-234c-47d5-af30-bdd3fe2cabac name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.745403414Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:53d1324e26eaaa0c435ed041d91574a3ae1288994655b14124fa7e2d69438666,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-kb2c6,Uid:8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960396468571402,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-kb2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:39:56.143352050Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:741481e5-8e38-4522-a9df-4b36e6d5cf9c,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960396308322795,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-29T19:39:56.000416950Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-8qrn6,Uid:af312704-4ea9-432d-85b2-67c59231187f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960395637593578,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af312704-4ea9-432d-85b2-67c59231187f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:39:55.331243259Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-9f75n,Uid:80f3b51d-fced-4cd9
-8c43-2a4eea28e470,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960395615609541,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:39:55.307213320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&PodSandboxMetadata{Name:kube-proxy-25cmq,Uid:35ecfe58-b448-4db0-b4cc-434422ec4ca6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960395247610236,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-29T19:39:54.932254885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-920571,Uid:0cf6c373ff76da1ddcc3061449fd91f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724960384655759229,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.243:8443,kubernetes.io/config.hash: 0cf6c373ff76da1ddcc3061449fd91f5,kubernetes.io/config.seen: 2024-08-29T19:39:44.212410524Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8015eca353e55241d3acfc7efe93
575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-920571,Uid:f40e977c7ae3bb7d4c4751e14efb0569,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960384644552133,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f40e977c7ae3bb7d4c4751e14efb0569,kubernetes.io/config.seen: 2024-08-29T19:39:44.212412806Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-920571,Uid:ce8043cb3a2563888629db8873a8265d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960384638985959,Labels:map[string]string{component: kube-controller-mana
ger,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce8043cb3a2563888629db8873a8265d,kubernetes.io/config.seen: 2024-08-29T19:39:44.212411928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-920571,Uid:9403733df9a120c418b7b08ac7bdfa69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724960384638135082,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.6
1.243:2379,kubernetes.io/config.hash: 9403733df9a120c418b7b08ac7bdfa69,kubernetes.io/config.seen: 2024-08-29T19:39:44.212406789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-920571,Uid:0cf6c373ff76da1ddcc3061449fd91f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724960098147522610,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.243:8443,kubernetes.io/config.hash: 0cf6c373ff76da1ddcc3061449fd91f5,kubernetes.io/config.seen: 2024-08-29T19:34:57.655125839Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=2ad2577b-234c-47d5-af30-bdd3fe2cabac name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.746267209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae7ba01f-3fd8-40c2-8bec-028da35e39f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.746349457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae7ba01f-3fd8-40c2-8bec-028da35e39f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.746563772Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae7ba01f-3fd8-40c2-8bec-028da35e39f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.759805815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b9fd6cd-2e3b-45bf-a052-eea537703bc8 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.759932709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b9fd6cd-2e3b-45bf-a052-eea537703bc8 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.761059697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbb96c64-a31b-4032-8573-9f7f70196d20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.761461589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961383761432557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbb96c64-a31b-4032-8573-9f7f70196d20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.766148252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2d4c735-d7b9-4f6b-aded-da2d3655be3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.766257880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2d4c735-d7b9-4f6b-aded-da2d3655be3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.766747757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2d4c735-d7b9-4f6b-aded-da2d3655be3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.802574275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5abff1af-5dfe-46fd-878d-f765bb804681 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.802667748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5abff1af-5dfe-46fd-878d-f765bb804681 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.803575715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=166e7ed8-bcb1-49da-b86d-42f0a0567ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.804064280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961383804030815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=166e7ed8-bcb1-49da-b86d-42f0a0567ad4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.804557138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6648173c-5fe2-4bd3-a1c5-75237a40fde9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.804611115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6648173c-5fe2-4bd3-a1c5-75237a40fde9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:56:23 embed-certs-920571 crio[710]: time="2024-08-29 19:56:23.805188566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc,PodSandboxId:4a51e94ded92d0007f926dc7351992c3b4cede4002c8519a0c81caedb1765d66,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960396615390999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 741481e5-8e38-4522-a9df-4b36e6d5cf9c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8,PodSandboxId:b5b83094e8553b450df88f995847154baeb3f06ffe6fff4200a819752cab6b9f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396454912739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-9f75n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80f3b51d-fced-4cd9-8c43-2a4eea28e470,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3,PodSandboxId:b0986a8b7cde5aa9b4d174a33053a6f7dc8d9c7b0302502273eb3119cf087a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960396380076731,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8qrn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
f312704-4ea9-432d-85b2-67c59231187f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c,PodSandboxId:29fc6b729b9b04b6cf152b7c2335c99d46d5e87a181075281b164d3fcb4434bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724960395556584259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-25cmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35ecfe58-b448-4db0-b4cc-434422ec4ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f,PodSandboxId:dcb65c7deae1da193eb3304a35cc32ac1fcbc3d02e6994e0f6739c00204e1021,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960384884503776,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce8043cb3a2563888629db8873a8265d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426,PodSandboxId:17e2a8fcd578489f8b72593ad3234e5afcaad86e789f1da3d3415fc1cd8336c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960384845749702,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9403733df9a120c418b7b08ac7bdfa69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4,PodSandboxId:8015eca353e55241d3acfc7efe93575b73ac72bd619b11e7ce7b634d69722b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960384844115506,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f40e977c7ae3bb7d4c4751e14efb0569,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c,PodSandboxId:5ca3436db099bce2554cda0be5ca34434aa491689c6951047f4b9b21d952cc1d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960384823025053,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904,PodSandboxId:f7441c62737ea4ae3fa0a164ac96585e88b511af83478ff76b57246193ba296d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960099495647531,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-920571,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cf6c373ff76da1ddcc3061449fd91f5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6648173c-5fe2-4bd3-a1c5-75237a40fde9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c8cd20fb8775       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   4a51e94ded92d       storage-provisioner
	5d756e81dd539       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   b5b83094e8553       coredns-6f6b679f8f-9f75n
	de983bf227ed9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   b0986a8b7cde5       coredns-6f6b679f8f-8qrn6
	72c825e53ea42       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   29fc6b729b9b0       kube-proxy-25cmq
	eb08aba65a9c3       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   dcb65c7deae1d       kube-controller-manager-embed-certs-920571
	26bf84f946f25       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   17e2a8fcd5784       etcd-embed-certs-920571
	0793eb009f9d3       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   8015eca353e55       kube-scheduler-embed-certs-920571
	237dcc747150f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   5ca3436db099b       kube-apiserver-embed-certs-920571
	e8d9ba1547d65       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   f7441c62737ea       kube-apiserver-embed-certs-920571
	
	
	==> coredns [5d756e81dd539c675eb318993c70ab41462642f1b5453597bf056e58e2c988c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de983bf227ed9f2dd5b0374edd8200137a94287e4ebe645f27aa0a425ac995c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-920571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-920571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=embed-certs-920571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:39:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-920571
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:56:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:55:18 +0000   Thu, 29 Aug 2024 19:39:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:55:18 +0000   Thu, 29 Aug 2024 19:39:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:55:18 +0000   Thu, 29 Aug 2024 19:39:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:55:18 +0000   Thu, 29 Aug 2024 19:39:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.243
	  Hostname:    embed-certs-920571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 85be8983c3db432aa3105d0a59604c10
	  System UUID:                85be8983-c3db-432a-a310-5d0a59604c10
	  Boot ID:                    11f022a9-6b03-438a-9ef5-3b96d6649273
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-8qrn6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-9f75n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-920571                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-920571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-920571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-25cmq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-920571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-kb2c6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-920571 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-920571 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-920571 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-920571 event: Registered Node embed-certs-920571 in Controller
	  Normal  CIDRAssignmentFailed     16m   cidrAllocator    Node embed-certs-920571 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.050446] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035952] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.694086] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.914496] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.518927] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.935828] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.056566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058678] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.174074] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.137591] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.281986] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +3.985971] systemd-fstab-generator[792]: Ignoring "noauto" option for root device
	[  +2.298735] systemd-fstab-generator[912]: Ignoring "noauto" option for root device
	[  +0.063002] kauditd_printk_skb: 158 callbacks suppressed
	[Aug29 19:35] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.828767] kauditd_printk_skb: 85 callbacks suppressed
	[Aug29 19:39] systemd-fstab-generator[2535]: Ignoring "noauto" option for root device
	[  +0.060698] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.990032] systemd-fstab-generator[2857]: Ignoring "noauto" option for root device
	[  +0.088072] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.793146] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +0.659371] kauditd_printk_skb: 34 callbacks suppressed
	[Aug29 19:40] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [26bf84f946f258f55baaf4ba337befe755f00da501a66468563afd31117ad426] <==
	{"level":"info","ts":"2024-08-29T19:39:45.755190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgPreVoteResp from 704fd09e1c9dce1f at term 1"}
	{"level":"info","ts":"2024-08-29T19:39:45.755231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.755264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f received MsgVoteResp from 704fd09e1c9dce1f at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.755300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"704fd09e1c9dce1f became leader at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.755332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 704fd09e1c9dce1f elected leader 704fd09e1c9dce1f at term 2"}
	{"level":"info","ts":"2024-08-29T19:39:45.760126Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"704fd09e1c9dce1f","local-member-attributes":"{Name:embed-certs-920571 ClientURLs:[https://192.168.61.243:2379]}","request-path":"/0/members/704fd09e1c9dce1f/attributes","cluster-id":"29cc905037b78c6d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:39:45.762033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:39:45.762127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:39:45.762368Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:39:45.762406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:39:45.762472Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.763110Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:39:45.766981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"29cc905037b78c6d","local-member-id":"704fd09e1c9dce1f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.767091Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.767133Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:39:45.767809Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:39:45.768543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:39:45.779975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.243:2379"}
	{"level":"info","ts":"2024-08-29T19:49:45.810021Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2024-08-29T19:49:45.817650Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"7.294191ms","hash":61781546,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2215936,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-29T19:49:45.817712Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":61781546,"revision":688,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T19:54:45.817375Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2024-08-29T19:54:45.821269Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":931,"took":"3.324571ms","hash":4157800339,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-29T19:54:45.821402Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4157800339,"revision":931,"compact-revision":688}
	{"level":"info","ts":"2024-08-29T19:55:59.569018Z","caller":"traceutil/trace.go:171","msg":"trace[25830843] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"139.358772ms","start":"2024-08-29T19:55:59.429627Z","end":"2024-08-29T19:55:59.568986Z","steps":["trace[25830843] 'process raft request'  (duration: 119.821311ms)","trace[25830843] 'compare'  (duration: 19.163929ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:56:24 up 21 min,  0 users,  load average: 0.12, 0.17, 0.17
	Linux embed-certs-920571 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [237dcc747150f1f03920ce8cc96e6032a91caaab5c5c7d4d3b0a266570d6e79c] <==
	I0829 19:52:48.436271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:52:48.437530       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:54:47.434719       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:54:47.434860       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:54:48.436379       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:54:48.436493       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0829 19:54:48.436400       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:54:48.436589       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:54:48.438208       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:54:48.438257       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:55:48.438656       1 handler_proxy.go:99] no RequestInfo found in the context
	W0829 19:55:48.438666       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:55:48.438931       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0829 19:55:48.438980       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:55:48.440069       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:55:48.440154       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [e8d9ba1547d65f5bcc1780584a56df69ff861f6613de7d4d4c5c49bc19c34904] <==
	W0829 19:39:39.611316       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.620061       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.695042       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.719858       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.735779       1 logging.go:55] [core] [Channel #20 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.758191       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.835180       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.875467       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.886181       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.888522       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.946646       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.991614       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:39.991725       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.020639       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.043657       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.055568       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.063230       1 logging.go:55] [core] [Channel #13 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.153111       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.180985       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.213237       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.243994       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.244198       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.336373       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.477398       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:40.813183       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [eb08aba65a9c31d41df72b4f587dc8edf9b8e9aacef08ea30d8f916f4441664f] <==
	I0829 19:50:55.074940       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:51:12.213374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="165.175µs"
	E0829 19:51:24.453352       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:51:25.082330       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:51:25.210967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="136.783µs"
	E0829 19:51:54.459666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:51:55.090581       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:52:24.467403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:52:25.098683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:52:54.474570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:52:55.111287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:53:24.480743       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:53:25.118391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:53:54.487059       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:53:55.126730       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:54:24.492809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:54:25.133772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:54:54.501706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:54:55.145836       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:55:18.512342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-920571"
	E0829 19:55:24.507591       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:55:25.154459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:55:54.514537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:55:55.163695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:56:14.218574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="422.107µs"
	
	
	==> kube-proxy [72c825e53ea420a2d2461659e636fcb315c235fa942574960cc9af80f6c6a55c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:39:56.174641       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:39:56.265129       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.243"]
	E0829 19:39:56.273040       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:39:56.518146       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:39:56.518206       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:39:56.518235       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:39:56.532020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:39:56.532248       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:39:56.532259       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:39:56.534036       1 config.go:197] "Starting service config controller"
	I0829 19:39:56.534059       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:39:56.534079       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:39:56.534088       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:39:56.534514       1 config.go:326] "Starting node config controller"
	I0829 19:39:56.534522       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:39:56.636022       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:39:56.636049       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:39:56.636077       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0793eb009f9d3fc92171f8acb7bd3a5f4cf639eb8d4499658e7c03b33fa027a4] <==
	W0829 19:39:47.458611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:39:47.459854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:47.460546       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 19:39:47.460577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.261263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 19:39:48.261409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.306386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:39:48.306434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.307286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 19:39:48.307328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.430464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:39:48.430519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.507772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 19:39:48.507822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.553646       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 19:39:48.553735       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0829 19:39:48.568081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 19:39:48.568125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.688603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 19:39:48.688674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.712472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:39:48.712525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:39:48.740668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:39:48.740772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0829 19:39:50.446857       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:55:26 embed-certs-920571 kubelet[2864]: E0829 19:55:26.198111    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:55:30 embed-certs-920571 kubelet[2864]: E0829 19:55:30.405332    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961330404936120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:30 embed-certs-920571 kubelet[2864]: E0829 19:55:30.405441    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961330404936120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:37 embed-certs-920571 kubelet[2864]: E0829 19:55:37.196577    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:55:40 embed-certs-920571 kubelet[2864]: E0829 19:55:40.411094    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961340410014874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:40 embed-certs-920571 kubelet[2864]: E0829 19:55:40.411267    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961340410014874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:49 embed-certs-920571 kubelet[2864]: E0829 19:55:49.197547    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]: E0829 19:55:50.230105    2864 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]: E0829 19:55:50.413416    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961350413155491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:50 embed-certs-920571 kubelet[2864]: E0829 19:55:50.413445    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961350413155491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:00 embed-certs-920571 kubelet[2864]: E0829 19:56:00.211166    2864 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 29 19:56:00 embed-certs-920571 kubelet[2864]: E0829 19:56:00.211485    2864 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 29 19:56:00 embed-certs-920571 kubelet[2864]: E0829 19:56:00.211858    2864 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjmcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-kb2c6_kube-system(8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 29 19:56:00 embed-certs-920571 kubelet[2864]: E0829 19:56:00.213184    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:56:00 embed-certs-920571 kubelet[2864]: E0829 19:56:00.414855    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961360414607571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:00 embed-certs-920571 kubelet[2864]: E0829 19:56:00.414971    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961360414607571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:10 embed-certs-920571 kubelet[2864]: E0829 19:56:10.416737    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961370416349661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:10 embed-certs-920571 kubelet[2864]: E0829 19:56:10.417391    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961370416349661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:14 embed-certs-920571 kubelet[2864]: E0829 19:56:14.197382    2864 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-kb2c6" podUID="8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9"
	Aug 29 19:56:20 embed-certs-920571 kubelet[2864]: E0829 19:56:20.421436    2864 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961380420868331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:20 embed-certs-920571 kubelet[2864]: E0829 19:56:20.421773    2864 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961380420868331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8c8cd20fb8775b46859df2e4a9f52f38ebbb779f961969c09b46bcb99ecc53dc] <==
	I0829 19:39:56.833669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 19:39:56.850614       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 19:39:56.850754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 19:39:56.865323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 19:39:56.865537       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-920571_1273688a-6773-4476-8330-4dc5dd3490c5!
	I0829 19:39:56.865636       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1414218a-6002-4eea-bfcc-2d73fa2d7d66", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-920571_1273688a-6773-4476-8330-4dc5dd3490c5 became leader
	I0829 19:39:56.965952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-920571_1273688a-6773-4476-8330-4dc5dd3490c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-920571 -n embed-certs-920571
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-920571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-kb2c6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-920571 describe pod metrics-server-6867b74b74-kb2c6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-920571 describe pod metrics-server-6867b74b74-kb2c6: exit status 1 (73.238961ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-kb2c6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-920571 describe pod metrics-server-6867b74b74-kb2c6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (437.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (465.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-29 19:57:10.015766719 +0000 UTC m=+6702.736559489
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-672127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.36µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-672127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-672127 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-672127 logs -n 25: (1.117750792s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:55 UTC |
	| start   | -p newest-cni-371258 --memory=2200 --alsologtostderr   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:55 UTC |
	| addons  | enable metrics-server -p newest-cni-371258             | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-371258                                   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-371258                  | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-371258 --memory=2200 --alsologtostderr   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	| image   | newest-cni-371258 image list                           | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-371258                                   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-371258                                   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:56 UTC | 29 Aug 24 19:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-371258                                   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:57 UTC | 29 Aug 24 19:57 UTC |
	| delete  | -p newest-cni-371258                                   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:57 UTC | 29 Aug 24 19:57 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:56:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
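	The format string above is the standard klog/glog layout: severity letter, month/day, timestamp with microseconds, thread id, source file:line, and the message. A minimal Go sketch of splitting such a line into its fields; the regular expression and variable names are illustrative, not part of minikube:

	// parse_klog.go - hypothetical sketch for splitting a klog-style log line
	// such as "I0829 19:56:21.939651   86792 out.go:345] Setting OutFile to fd 1 ..."
	package main

	import (
		"fmt"
		"regexp"
	)

	// severity, mmdd, hh:mm:ss.uuuuuu, thread id, file:line, message
	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

	func main() {
		sample := "I0829 19:56:21.939651   86792 out.go:345] Setting OutFile to fd 1 ..."
		if m := logLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}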
	I0829 19:56:21.939651   86792 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:56:21.939759   86792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:56:21.939767   86792 out.go:358] Setting ErrFile to fd 2...
	I0829 19:56:21.939771   86792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:56:21.939941   86792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:56:21.940451   86792 out.go:352] Setting JSON to false
	I0829 19:56:21.941331   86792 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9529,"bootTime":1724951853,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:56:21.941388   86792 start.go:139] virtualization: kvm guest
	I0829 19:56:21.943333   86792 out.go:177] * [newest-cni-371258] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:56:21.944531   86792 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:56:21.944534   86792 notify.go:220] Checking for updates...
	I0829 19:56:21.945739   86792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:56:21.946958   86792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:56:21.948111   86792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:56:21.949231   86792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:56:21.950277   86792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:56:21.951705   86792 config.go:182] Loaded profile config "newest-cni-371258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:56:21.952083   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:21.952141   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:21.967068   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36285
	I0829 19:56:21.967447   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:21.967935   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:21.967956   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:21.968283   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:21.968490   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:21.968766   86792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:56:21.969152   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:21.969195   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:21.984712   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40821
	I0829 19:56:21.985069   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:21.985505   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:21.985525   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:21.985878   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:21.986072   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:22.022584   86792 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:56:22.023818   86792 start.go:297] selected driver: kvm2
	I0829 19:56:22.023830   86792 start.go:901] validating driver "kvm2" against &{Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:56:22.023944   86792 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:56:22.024595   86792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:56:22.024669   86792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:56:22.040730   86792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:56:22.041118   86792 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0829 19:56:22.041181   86792 cni.go:84] Creating CNI manager for ""
	I0829 19:56:22.041195   86792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:56:22.041231   86792 start.go:340] cluster config:
	{Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:56:22.041356   86792 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:56:22.042922   86792 out.go:177] * Starting "newest-cni-371258" primary control-plane node in "newest-cni-371258" cluster
	I0829 19:56:22.043915   86792 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:56:22.043954   86792 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:56:22.043961   86792 cache.go:56] Caching tarball of preloaded images
	I0829 19:56:22.044018   86792 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:56:22.044029   86792 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0829 19:56:22.044118   86792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/config.json ...
	I0829 19:56:22.044304   86792 start.go:360] acquireMachinesLock for newest-cni-371258: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:56:22.044342   86792 start.go:364] duration metric: took 20.363µs to acquireMachinesLock for "newest-cni-371258"
	I0829 19:56:22.044356   86792 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:56:22.044361   86792 fix.go:54] fixHost starting: 
	I0829 19:56:22.044613   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:22.044641   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:22.059859   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42147
	I0829 19:56:22.060252   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:22.060690   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:22.060707   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:22.061011   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:22.061168   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:22.061316   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:22.062904   86792 fix.go:112] recreateIfNeeded on newest-cni-371258: state=Stopped err=<nil>
	I0829 19:56:22.062928   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	W0829 19:56:22.063084   86792 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:56:22.065720   86792 out.go:177] * Restarting existing kvm2 VM for "newest-cni-371258" ...
	I0829 19:56:22.066931   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Start
	I0829 19:56:22.067114   86792 main.go:141] libmachine: (newest-cni-371258) Ensuring networks are active...
	I0829 19:56:22.067968   86792 main.go:141] libmachine: (newest-cni-371258) Ensuring network default is active
	I0829 19:56:22.068347   86792 main.go:141] libmachine: (newest-cni-371258) Ensuring network mk-newest-cni-371258 is active
	I0829 19:56:22.068772   86792 main.go:141] libmachine: (newest-cni-371258) Getting domain xml...
	I0829 19:56:22.069537   86792 main.go:141] libmachine: (newest-cni-371258) Creating domain...
	I0829 19:56:23.355987   86792 main.go:141] libmachine: (newest-cni-371258) Waiting to get IP...
	I0829 19:56:23.356942   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:23.357432   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:23.357670   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:23.357559   86827 retry.go:31] will retry after 312.202117ms: waiting for machine to come up
	I0829 19:56:23.671163   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:23.671617   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:23.671667   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:23.671564   86827 retry.go:31] will retry after 362.559617ms: waiting for machine to come up
	I0829 19:56:24.036333   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:24.036793   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:24.036812   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:24.036741   86827 retry.go:31] will retry after 425.561954ms: waiting for machine to come up
	I0829 19:56:24.464263   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:24.464816   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:24.464849   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:24.464744   86827 retry.go:31] will retry after 497.361792ms: waiting for machine to come up
	I0829 19:56:24.963293   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:24.963687   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:24.963707   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:24.963640   86827 retry.go:31] will retry after 761.478816ms: waiting for machine to come up
	I0829 19:56:25.805319   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:25.805804   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:25.805832   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:25.805750   86827 retry.go:31] will retry after 779.700157ms: waiting for machine to come up
	I0829 19:56:26.586767   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:26.587201   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:26.587218   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:26.587182   86827 retry.go:31] will retry after 793.371117ms: waiting for machine to come up
	I0829 19:56:27.382378   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:27.382830   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:27.382857   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:27.382772   86827 retry.go:31] will retry after 1.042398043s: waiting for machine to come up
	I0829 19:56:28.426878   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:28.427273   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:28.427302   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:28.427237   86827 retry.go:31] will retry after 1.418797698s: waiting for machine to come up
	I0829 19:56:29.847217   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:29.847572   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:29.847601   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:29.847524   86827 retry.go:31] will retry after 1.861947467s: waiting for machine to come up
	I0829 19:56:31.710799   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:31.711211   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:31.711236   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:31.711165   86827 retry.go:31] will retry after 1.992232452s: waiting for machine to come up
	I0829 19:56:33.705404   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:33.705856   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:33.705901   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:33.705814   86827 retry.go:31] will retry after 2.223316914s: waiting for machine to come up
	I0829 19:56:35.931280   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:35.931668   86792 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:56:35.931705   86792 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:56:35.931631   86827 retry.go:31] will retry after 3.406304286s: waiting for machine to come up
	I0829 19:56:39.340864   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.341303   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has current primary IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.341344   86792 main.go:141] libmachine: (newest-cni-371258) Found IP for machine: 192.168.72.224
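	The retry.go lines above show the wait-for-DHCP-lease loop: each failed IP lookup schedules another attempt after a progressively longer, jittered delay (312ms, 362ms, ... up to several seconds). A rough Go sketch of that pattern; the function and names are hypothetical and the backoff arithmetic only approximates what the log shows, it is not minikube's actual retry implementation:

	// wait_for_ip.go - hypothetical sketch of the retry-with-growing-delay pattern
	// seen in the retry.go lines above while waiting for the VM's DHCP lease.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it succeeds or maxWait elapses, sleeping a
	// jittered, growing interval between attempts (roughly 300ms up to seconds).
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay on every attempt
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.72.224", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}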
	I0829 19:56:39.341368   86792 main.go:141] libmachine: (newest-cni-371258) Reserving static IP address...
	I0829 19:56:39.341797   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "newest-cni-371258", mac: "52:54:00:3f:71:aa", ip: "192.168.72.224"} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.341818   86792 main.go:141] libmachine: (newest-cni-371258) DBG | skip adding static IP to network mk-newest-cni-371258 - found existing host DHCP lease matching {name: "newest-cni-371258", mac: "52:54:00:3f:71:aa", ip: "192.168.72.224"}
	I0829 19:56:39.341842   86792 main.go:141] libmachine: (newest-cni-371258) Reserved static IP address: 192.168.72.224
	I0829 19:56:39.341857   86792 main.go:141] libmachine: (newest-cni-371258) Waiting for SSH to be available...
	I0829 19:56:39.341866   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Getting to WaitForSSH function...
	I0829 19:56:39.344008   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.344346   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.344381   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.344508   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Using SSH client type: external
	I0829 19:56:39.344549   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa (-rw-------)
	I0829 19:56:39.344607   86792 main.go:141] libmachine: (newest-cni-371258) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:56:39.344627   86792 main.go:141] libmachine: (newest-cni-371258) DBG | About to run SSH command:
	I0829 19:56:39.344643   86792 main.go:141] libmachine: (newest-cni-371258) DBG | exit 0
	I0829 19:56:39.465838   86792 main.go:141] libmachine: (newest-cni-371258) DBG | SSH cmd err, output: <nil>: 
	I0829 19:56:39.466187   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetConfigRaw
	I0829 19:56:39.466802   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetIP
	I0829 19:56:39.468872   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.469203   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.469233   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.469473   86792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/config.json ...
	I0829 19:56:39.469667   86792 machine.go:93] provisionDockerMachine start ...
	I0829 19:56:39.469682   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:39.469885   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:39.471983   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.472280   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.472301   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.472372   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:39.472537   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:39.472696   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:39.472851   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:39.473033   86792 main.go:141] libmachine: Using SSH client type: native
	I0829 19:56:39.473268   86792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0829 19:56:39.473283   86792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:56:39.570055   86792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:56:39.570083   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetMachineName
	I0829 19:56:39.570326   86792 buildroot.go:166] provisioning hostname "newest-cni-371258"
	I0829 19:56:39.570355   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetMachineName
	I0829 19:56:39.570556   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:39.573095   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.573386   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.573424   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.573543   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:39.573732   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:39.573875   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:39.573981   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:39.574163   86792 main.go:141] libmachine: Using SSH client type: native
	I0829 19:56:39.574312   86792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0829 19:56:39.574326   86792 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-371258 && echo "newest-cni-371258" | sudo tee /etc/hostname
	I0829 19:56:39.686725   86792 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-371258
	
	I0829 19:56:39.686749   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:39.689598   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.690071   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.690142   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.690288   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:39.690478   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:39.690653   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:39.690929   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:39.691168   86792 main.go:141] libmachine: Using SSH client type: native
	I0829 19:56:39.691418   86792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0829 19:56:39.691443   86792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-371258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-371258/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-371258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:56:39.797994   86792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:56:39.798024   86792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:56:39.798040   86792 buildroot.go:174] setting up certificates
	I0829 19:56:39.798048   86792 provision.go:84] configureAuth start
	I0829 19:56:39.798056   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetMachineName
	I0829 19:56:39.798346   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetIP
	I0829 19:56:39.800990   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.801312   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.801336   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.801461   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:39.803733   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.804027   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:39.804058   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:39.804171   86792 provision.go:143] copyHostCerts
	I0829 19:56:39.804232   86792 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:56:39.804252   86792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:56:39.804332   86792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:56:39.804444   86792 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:56:39.804454   86792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:56:39.804484   86792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:56:39.804561   86792 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:56:39.804572   86792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:56:39.804597   86792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:56:39.804662   86792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.newest-cni-371258 san=[127.0.0.1 192.168.72.224 localhost minikube newest-cni-371258]
	I0829 19:56:40.000024   86792 provision.go:177] copyRemoteCerts
	I0829 19:56:40.000080   86792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:56:40.000110   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:40.003312   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.003665   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.003693   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.003844   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:40.004047   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.004216   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:40.004393   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:40.084460   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:56:40.107460   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:56:40.130599   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:56:40.152039   86792 provision.go:87] duration metric: took 353.980895ms to configureAuth
	I0829 19:56:40.152064   86792 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:56:40.152239   86792 config.go:182] Loaded profile config "newest-cni-371258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:56:40.152298   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:40.155309   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.155696   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.155727   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.155887   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:40.156094   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.156298   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.156453   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:40.156681   86792 main.go:141] libmachine: Using SSH client type: native
	I0829 19:56:40.156856   86792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0829 19:56:40.156871   86792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:56:40.381395   86792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:56:40.381428   86792 machine.go:96] duration metric: took 911.749811ms to provisionDockerMachine
	I0829 19:56:40.381442   86792 start.go:293] postStartSetup for "newest-cni-371258" (driver="kvm2")
	I0829 19:56:40.381454   86792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:56:40.381482   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:40.381785   86792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:56:40.381818   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:40.384493   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.384857   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.384898   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.384981   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:40.385159   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.385320   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:40.385505   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:40.464329   86792 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:56:40.468185   86792 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:56:40.468211   86792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:56:40.468291   86792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:56:40.468382   86792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:56:40.468492   86792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:56:40.478393   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:56:40.501324   86792 start.go:296] duration metric: took 119.869689ms for postStartSetup
	I0829 19:56:40.501356   86792 fix.go:56] duration metric: took 18.456994527s for fixHost
	I0829 19:56:40.501378   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:40.503696   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.503982   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.504013   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.504131   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:40.504326   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.504497   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.504646   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:40.504801   86792 main.go:141] libmachine: Using SSH client type: native
	I0829 19:56:40.504962   86792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.224 22 <nil> <nil>}
	I0829 19:56:40.504973   86792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:56:40.602344   86792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724961400.578267037
	
	I0829 19:56:40.602368   86792 fix.go:216] guest clock: 1724961400.578267037
	I0829 19:56:40.602375   86792 fix.go:229] Guest: 2024-08-29 19:56:40.578267037 +0000 UTC Remote: 2024-08-29 19:56:40.501359508 +0000 UTC m=+18.598713952 (delta=76.907529ms)
	I0829 19:56:40.602413   86792 fix.go:200] guest clock delta is within tolerance: 76.907529ms
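	The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) with the host's timestamp and accept the ~77ms skew as within tolerance. A small Go sketch of that comparison using the values from the log; the clockDelta helper and the 2s threshold are illustrative assumptions, not minikube's code:

	// clock_delta.go - hypothetical sketch of the guest-vs-host clock check
	// reported by the fix.go lines above.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta converts the guest's "seconds.nanoseconds" epoch string into a
	// time.Time and returns how far it is ahead of (or behind) hostNow.
	func clockDelta(guestEpoch string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestEpoch, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second))) // float64 loses sub-microsecond precision
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Values taken from the log: guest "date +%s.%N" output vs. the host timestamp.
		host := time.Date(2024, 8, 29, 19, 56, 40, 501359508, time.UTC)
		delta, _ := clockDelta("1724961400.578267037", host)
		const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual value
		fmt.Printf("guest clock delta: %v (within %v: %v)\n", delta, tolerance, delta.Abs() < tolerance)
	}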
	I0829 19:56:40.602428   86792 start.go:83] releasing machines lock for "newest-cni-371258", held for 18.558076554s
	I0829 19:56:40.602454   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:40.602674   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetIP
	I0829 19:56:40.605052   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.605343   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.605374   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.605556   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:40.606021   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:40.606193   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:40.606277   86792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:56:40.606330   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:40.606406   86792 ssh_runner.go:195] Run: cat /version.json
	I0829 19:56:40.606428   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:40.608902   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.609155   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.609347   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.609375   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.609449   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:40.609586   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:40.609598   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.609616   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:40.609728   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:40.609776   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:40.609888   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:40.609983   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:40.610136   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:40.610264   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:40.682539   86792 ssh_runner.go:195] Run: systemctl --version
	I0829 19:56:40.724764   86792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:56:40.867810   86792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:56:40.874283   86792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:56:40.874369   86792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:56:40.889587   86792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:56:40.889608   86792 start.go:495] detecting cgroup driver to use...
	I0829 19:56:40.889658   86792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:56:40.907745   86792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:56:40.923374   86792 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:56:40.923467   86792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:56:40.936200   86792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:56:40.949039   86792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:56:41.069584   86792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:56:41.240157   86792 docker.go:233] disabling docker service ...
	I0829 19:56:41.240213   86792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:56:41.255149   86792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:56:41.267223   86792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:56:41.380433   86792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:56:41.494831   86792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:56:41.508116   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:56:41.525322   86792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:56:41.525394   86792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:56:41.536115   86792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:56:41.536178   86792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:56:41.546970   86792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:56:41.556517   86792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:56:41.565960   86792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:56:41.576024   86792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:56:41.585970   86792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:56:41.601484   86792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
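	The sed commands above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) all edit the same drop-in file. For orientation, an approximate fragment of /etc/crio/crio.conf.d/02-crio.conf after those edits; the values come straight from the commands in the log, but the section headers are shown as typical for a CRI-O config and the actual drop-in shipped in the minikube ISO may arrange keys differently:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]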
	I0829 19:56:41.610997   86792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:56:41.619611   86792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:56:41.619661   86792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:56:41.632028   86792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:56:41.640793   86792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:56:41.760014   86792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:56:41.846165   86792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:56:41.846242   86792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:56:41.851192   86792 start.go:563] Will wait 60s for crictl version
	I0829 19:56:41.851251   86792 ssh_runner.go:195] Run: which crictl
	I0829 19:56:41.854566   86792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:56:41.889555   86792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:56:41.889631   86792 ssh_runner.go:195] Run: crio --version
	I0829 19:56:41.916098   86792 ssh_runner.go:195] Run: crio --version
	I0829 19:56:41.942719   86792 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:56:41.944175   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetIP
	I0829 19:56:41.947045   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:41.947348   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:41.947375   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:41.947595   86792 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:56:41.951338   86792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:56:41.964905   86792 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0829 19:56:41.966141   86792 kubeadm.go:883] updating cluster {Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:56:41.966258   86792 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:56:41.966328   86792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:56:41.999624   86792 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:56:41.999683   86792 ssh_runner.go:195] Run: which lz4
	I0829 19:56:42.003184   86792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:56:42.006890   86792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:56:42.006924   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:56:43.200947   86792 crio.go:462] duration metric: took 1.197788924s to copy over tarball
	I0829 19:56:43.201011   86792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:56:45.223534   86792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.022497079s)
	I0829 19:56:45.223562   86792 crio.go:469] duration metric: took 2.022589038s to extract the tarball
	I0829 19:56:45.223570   86792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:56:45.260263   86792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:56:45.302080   86792 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:56:45.302110   86792 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:56:45.302118   86792 kubeadm.go:934] updating node { 192.168.72.224 8443 v1.31.0 crio true true} ...
	I0829 19:56:45.302214   86792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-371258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:56:45.302281   86792 ssh_runner.go:195] Run: crio config
	I0829 19:56:45.351756   86792 cni.go:84] Creating CNI manager for ""
	I0829 19:56:45.351777   86792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:56:45.351792   86792 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0829 19:56:45.351817   86792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.224 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-371258 NodeName:newest-cni-371258 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:56:45.351946   86792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-371258"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:56:45.352008   86792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:56:45.361181   86792 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:56:45.361240   86792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:56:45.369642   86792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0829 19:56:45.385694   86792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:56:45.401695   86792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0829 19:56:45.419423   86792 ssh_runner.go:195] Run: grep 192.168.72.224	control-plane.minikube.internal$ /etc/hosts
	I0829 19:56:45.422923   86792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.224	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:56:45.433910   86792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:56:45.548309   86792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:56:45.563858   86792 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258 for IP: 192.168.72.224
	I0829 19:56:45.563887   86792 certs.go:194] generating shared ca certs ...
	I0829 19:56:45.563908   86792 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:56:45.564108   86792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:56:45.564158   86792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:56:45.564171   86792 certs.go:256] generating profile certs ...
	I0829 19:56:45.564255   86792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/client.key
	I0829 19:56:45.564325   86792 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/apiserver.key.4e984e8b
	I0829 19:56:45.564365   86792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/proxy-client.key
	I0829 19:56:45.564498   86792 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:56:45.564553   86792 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:56:45.564568   86792 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:56:45.564605   86792 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:56:45.564645   86792 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:56:45.564679   86792 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:56:45.564726   86792 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:56:45.565521   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:56:45.608305   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:56:45.655209   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:56:45.686470   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:56:45.721809   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:56:45.752860   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 19:56:45.777617   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:56:45.799041   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:56:45.820792   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:56:45.841767   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:56:45.863226   86792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:56:45.884250   86792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:56:45.899041   86792 ssh_runner.go:195] Run: openssl version
	I0829 19:56:45.904396   86792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:56:45.914051   86792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:56:45.918160   86792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:56:45.918219   86792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:56:45.923887   86792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:56:45.933950   86792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:56:45.943948   86792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:56:45.948007   86792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:56:45.948055   86792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:56:45.953220   86792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:56:45.962977   86792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:56:45.972538   86792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:56:45.976647   86792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:56:45.976689   86792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:56:45.981828   86792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:56:45.991816   86792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:56:45.996009   86792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:56:46.001419   86792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:56:46.006859   86792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:56:46.012280   86792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:56:46.017638   86792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:56:46.023097   86792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0829 19:56:46.028418   86792 kubeadm.go:392] StartCluster: {Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:56:46.028519   86792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:56:46.028575   86792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:56:46.063568   86792 cri.go:89] found id: ""
	I0829 19:56:46.063646   86792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:56:46.073098   86792 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:56:46.073120   86792 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:56:46.073166   86792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:56:46.082686   86792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:56:46.083348   86792 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-371258" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:56:46.083655   86792 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-371258" cluster setting kubeconfig missing "newest-cni-371258" context setting]
	I0829 19:56:46.084241   86792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:56:46.085796   86792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:56:46.094493   86792 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.224
	I0829 19:56:46.094515   86792 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:56:46.094524   86792 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:56:46.094569   86792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:56:46.132174   86792 cri.go:89] found id: ""
	I0829 19:56:46.132247   86792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:56:46.147108   86792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:56:46.156292   86792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:56:46.156438   86792 kubeadm.go:157] found existing configuration files:
	
	I0829 19:56:46.156497   86792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:56:46.164970   86792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:56:46.165022   86792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:56:46.173821   86792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:56:46.182418   86792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:56:46.182476   86792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:56:46.191349   86792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:56:46.199821   86792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:56:46.199875   86792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:56:46.208945   86792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:56:46.217949   86792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:56:46.218020   86792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:56:46.226720   86792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:56:46.235800   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:56:46.338199   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:56:47.436051   86792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.097819447s)
	I0829 19:56:47.436084   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:56:47.644094   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:56:47.703712   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:56:47.795065   86792 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:56:47.795134   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:48.295848   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:48.795690   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:49.296008   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:49.795627   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:50.295496   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:50.308501   86792 api_server.go:72] duration metric: took 2.513437049s to wait for apiserver process to appear ...
	I0829 19:56:50.308531   86792 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:56:50.308567   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:52.773656   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:56:52.773694   86792 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:56:52.773708   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:52.798207   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:56:52.798233   86792 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:56:52.809483   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:52.839853   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:56:52.839879   86792 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:56:53.309492   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:53.314278   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:56:53.314311   86792 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:56:53.808862   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:53.814127   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:56:53.814152   86792 api_server.go:103] status: https://192.168.72.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:56:54.309284   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:54.316351   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 200:
	ok
	I0829 19:56:54.324766   86792 api_server.go:141] control plane version: v1.31.0
	I0829 19:56:54.324794   86792 api_server.go:131] duration metric: took 4.016256622s to wait for apiserver health ...
	I0829 19:56:54.324802   86792 cni.go:84] Creating CNI manager for ""
	I0829 19:56:54.324809   86792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:56:54.326514   86792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:56:54.327840   86792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:56:54.349439   86792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:56:54.382816   86792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:56:54.397270   86792 system_pods.go:59] 8 kube-system pods found
	I0829 19:56:54.397304   86792 system_pods.go:61] "coredns-6f6b679f8f-5sm65" [a8b3126e-7ec8-4300-b816-732852274637] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:56:54.397312   86792 system_pods.go:61] "etcd-newest-cni-371258" [e6d5fe4b-e105-4d5f-b12f-0eb17b6f5c6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:56:54.397321   86792 system_pods.go:61] "kube-apiserver-newest-cni-371258" [55ce2c05-d584-475f-b03d-193c502cb7fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:56:54.397331   86792 system_pods.go:61] "kube-controller-manager-newest-cni-371258" [70986112-8b74-4630-8599-3b01028c361c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:56:54.397338   86792 system_pods.go:61] "kube-proxy-bk9bt" [837c6a5f-4a09-4797-9035-244cd2bf974a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:56:54.397344   86792 system_pods.go:61] "kube-scheduler-newest-cni-371258" [6d0242a1-3da1-4b29-8859-8bd662a40b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:56:54.397349   86792 system_pods.go:61] "metrics-server-6867b74b74-82cwq" [6e02c52e-6974-4d74-8daf-4d7889efe968] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:56:54.397355   86792 system_pods.go:61] "storage-provisioner" [b4972175-6816-4d47-9b32-3267d5606bfd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:56:54.397362   86792 system_pods.go:74] duration metric: took 14.518933ms to wait for pod list to return data ...
	I0829 19:56:54.397371   86792 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:56:54.401556   86792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:56:54.401582   86792 node_conditions.go:123] node cpu capacity is 2
	I0829 19:56:54.401590   86792 node_conditions.go:105] duration metric: took 4.213748ms to run NodePressure ...
	I0829 19:56:54.401607   86792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:56:54.683522   86792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:56:54.694203   86792 ops.go:34] apiserver oom_adj: -16
	I0829 19:56:54.694228   86792 kubeadm.go:597] duration metric: took 8.621101137s to restartPrimaryControlPlane
	I0829 19:56:54.694239   86792 kubeadm.go:394] duration metric: took 8.665829809s to StartCluster
	I0829 19:56:54.694280   86792 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:56:54.694362   86792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:56:54.695407   86792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:56:54.695693   86792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.224 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:56:54.695867   86792 config.go:182] Loaded profile config "newest-cni-371258": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:56:54.695827   86792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:56:54.695930   86792 addons.go:69] Setting dashboard=true in profile "newest-cni-371258"
	I0829 19:56:54.695940   86792 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-371258"
	I0829 19:56:54.695949   86792 addons.go:69] Setting default-storageclass=true in profile "newest-cni-371258"
	I0829 19:56:54.695970   86792 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-371258"
	W0829 19:56:54.695978   86792 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:56:54.695982   86792 addons.go:69] Setting metrics-server=true in profile "newest-cni-371258"
	I0829 19:56:54.695999   86792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-371258"
	I0829 19:56:54.696009   86792 host.go:66] Checking if "newest-cni-371258" exists ...
	I0829 19:56:54.696024   86792 addons.go:234] Setting addon metrics-server=true in "newest-cni-371258"
	W0829 19:56:54.696037   86792 addons.go:243] addon metrics-server should already be in state true
	I0829 19:56:54.696070   86792 host.go:66] Checking if "newest-cni-371258" exists ...
	I0829 19:56:54.695963   86792 addons.go:234] Setting addon dashboard=true in "newest-cni-371258"
	W0829 19:56:54.696093   86792 addons.go:243] addon dashboard should already be in state true
	I0829 19:56:54.696116   86792 host.go:66] Checking if "newest-cni-371258" exists ...
	I0829 19:56:54.696398   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.696451   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.696445   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.696460   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.696480   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.696483   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.696516   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.696486   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.698035   86792 out.go:177] * Verifying Kubernetes components...
	I0829 19:56:54.703193   86792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:56:54.712560   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I0829 19:56:54.713097   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.713758   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.713785   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.714209   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.714731   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0829 19:56:54.714856   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.714883   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.715111   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.715677   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.715700   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.715949   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0829 19:56:54.716206   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.716417   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.716824   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.716840   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.716869   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.717001   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.717080   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0829 19:56:54.717371   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.717535   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:54.717645   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.718940   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.718961   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.719253   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.719813   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.719857   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.721254   86792 addons.go:234] Setting addon default-storageclass=true in "newest-cni-371258"
	W0829 19:56:54.721277   86792 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:56:54.721305   86792 host.go:66] Checking if "newest-cni-371258" exists ...
	I0829 19:56:54.721671   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.721714   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.731964   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0829 19:56:54.732377   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.732948   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.732976   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.733019   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0829 19:56:54.733451   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.733453   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.733650   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:54.733872   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.733887   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.734166   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.734337   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:54.735550   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:54.736106   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:54.738100   86792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:56:54.738144   86792 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:56:54.739303   86792 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:56:54.739320   86792 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:56:54.739340   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:54.739360   86792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:56:54.739379   86792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:56:54.739397   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:54.741760   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37609
	I0829 19:56:54.742854   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.743257   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.743299   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.743441   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.743459   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.743836   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.743843   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:54.743904   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:54.743916   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.743922   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0829 19:56:54.744063   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:54.744158   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:54.744646   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.744741   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:54.744796   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:54.744810   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.745003   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:54.745203   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:54.745355   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.745375   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.745575   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:54.745773   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:54.745814   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.745814   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:54.745953   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:54.747585   86792 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0829 19:56:54.748702   86792 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0829 19:56:54.749644   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0829 19:56:54.749657   86792 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0829 19:56:54.749681   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:54.750886   86792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:56:54.750928   86792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:56:54.752885   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.753246   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:54.753278   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.753501   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:54.753729   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:54.753865   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:54.754142   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:54.767244   86792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43057
	I0829 19:56:54.767712   86792 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:56:54.768147   86792 main.go:141] libmachine: Using API Version  1
	I0829 19:56:54.768164   86792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:56:54.768479   86792 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:56:54.768680   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetState
	I0829 19:56:54.770319   86792 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:56:54.770571   86792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:56:54.770587   86792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:56:54.770606   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHHostname
	I0829 19:56:54.773301   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.773721   86792 main.go:141] libmachine: (newest-cni-371258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:71:aa", ip: ""} in network mk-newest-cni-371258: {Iface:virbr4 ExpiryTime:2024-08-29 20:56:32 +0000 UTC Type:0 Mac:52:54:00:3f:71:aa Iaid: IPaddr:192.168.72.224 Prefix:24 Hostname:newest-cni-371258 Clientid:01:52:54:00:3f:71:aa}
	I0829 19:56:54.773754   86792 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined IP address 192.168.72.224 and MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:56:54.773869   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHPort
	I0829 19:56:54.774040   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHKeyPath
	I0829 19:56:54.774246   86792 main.go:141] libmachine: (newest-cni-371258) Calling .GetSSHUsername
	I0829 19:56:54.774430   86792 sshutil.go:53] new ssh client: &{IP:192.168.72.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa Username:docker}
	I0829 19:56:54.940181   86792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:56:54.965382   86792 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:56:54.965471   86792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:56:54.982397   86792 api_server.go:72] duration metric: took 286.660745ms to wait for apiserver process to appear ...
	I0829 19:56:54.982428   86792 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:56:54.982451   86792 api_server.go:253] Checking apiserver healthz at https://192.168.72.224:8443/healthz ...
	I0829 19:56:54.990231   86792 api_server.go:279] https://192.168.72.224:8443/healthz returned 200:
	ok
	I0829 19:56:54.991727   86792 api_server.go:141] control plane version: v1.31.0
	I0829 19:56:54.991757   86792 api_server.go:131] duration metric: took 9.317995ms to wait for apiserver health ...
	I0829 19:56:54.991767   86792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:56:54.996285   86792 system_pods.go:59] 8 kube-system pods found
	I0829 19:56:54.996322   86792 system_pods.go:61] "coredns-6f6b679f8f-5sm65" [a8b3126e-7ec8-4300-b816-732852274637] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:56:54.996333   86792 system_pods.go:61] "etcd-newest-cni-371258" [e6d5fe4b-e105-4d5f-b12f-0eb17b6f5c6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:56:54.996344   86792 system_pods.go:61] "kube-apiserver-newest-cni-371258" [55ce2c05-d584-475f-b03d-193c502cb7fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:56:54.996352   86792 system_pods.go:61] "kube-controller-manager-newest-cni-371258" [70986112-8b74-4630-8599-3b01028c361c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:56:54.996359   86792 system_pods.go:61] "kube-proxy-bk9bt" [837c6a5f-4a09-4797-9035-244cd2bf974a] Running
	I0829 19:56:54.996367   86792 system_pods.go:61] "kube-scheduler-newest-cni-371258" [6d0242a1-3da1-4b29-8859-8bd662a40b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:56:54.996384   86792 system_pods.go:61] "metrics-server-6867b74b74-82cwq" [6e02c52e-6974-4d74-8daf-4d7889efe968] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:56:54.996394   86792 system_pods.go:61] "storage-provisioner" [b4972175-6816-4d47-9b32-3267d5606bfd] Running
	I0829 19:56:54.996401   86792 system_pods.go:74] duration metric: took 4.628227ms to wait for pod list to return data ...
	I0829 19:56:54.996411   86792 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:56:54.999354   86792 default_sa.go:45] found service account: "default"
	I0829 19:56:54.999380   86792 default_sa.go:55] duration metric: took 2.956463ms for default service account to be created ...
	I0829 19:56:54.999393   86792 kubeadm.go:582] duration metric: took 303.663118ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0829 19:56:54.999413   86792 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:56:55.001500   86792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:56:55.001519   86792 node_conditions.go:123] node cpu capacity is 2
	I0829 19:56:55.001531   86792 node_conditions.go:105] duration metric: took 2.108982ms to run NodePressure ...
	I0829 19:56:55.001543   86792 start.go:241] waiting for startup goroutines ...
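
	For context on the readiness wait logged above (api_server.go polling https://192.168.72.224:8443/healthz until it returns 200 "ok", then listing kube-system pods), a minimal standalone sketch of such a healthz poll might look like the following. This is not minikube's implementation; the URL, the 4-minute deadline, and the decision to skip TLS verification and authentication are illustrative assumptions.

```go
// Minimal sketch (not minikube's code): poll an apiserver /healthz endpoint
// until it returns HTTP 200 or a deadline passes. Real apiservers may also
// require client-certificate auth, which this sketch omits.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate in this sketch; skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.224:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
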
	I0829 19:56:55.070268   86792 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:56:55.070306   86792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:56:55.085403   86792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:56:55.095157   86792 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:56:55.095192   86792 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:56:55.113187   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0829 19:56:55.113221   86792 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0829 19:56:55.125789   86792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:56:55.144498   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0829 19:56:55.144524   86792 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0829 19:56:55.167338   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0829 19:56:55.167364   86792 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0829 19:56:55.167940   86792 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:56:55.167962   86792 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:56:55.210379   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0829 19:56:55.210403   86792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0829 19:56:55.251032   86792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:56:55.279563   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0829 19:56:55.279592   86792 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0829 19:56:55.341355   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0829 19:56:55.341387   86792 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0829 19:56:55.404655   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0829 19:56:55.404682   86792 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0829 19:56:55.503804   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0829 19:56:55.503843   86792 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0829 19:56:55.572851   86792 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0829 19:56:55.572878   86792 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0829 19:56:55.636046   86792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
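
	The addon installation logged above copies each manifest to /etc/kubernetes/addons/ on the node and then runs kubectl apply with KUBECONFIG pointing at /var/lib/minikube/kubeconfig. A minimal sketch of that apply step is shown below, run as a local command rather than over SSH as minikube's ssh_runner does; the kubeconfig path and manifest names are taken from the log, everything else (function name, error handling) is an assumption.

```go
// Minimal sketch (not minikube's ssh_runner): run "kubectl apply -f <manifest>..."
// with an explicit KUBECONFIG, mirroring the commands seen in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	_ = applyManifests("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml")
}
```
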
	I0829 19:56:56.745768   86792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660324078s)
	I0829 19:56:56.745813   86792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.61999164s)
	I0829 19:56:56.745852   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.745870   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.745855   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.745935   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.746290   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:56.746294   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.746331   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.746341   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.746326   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:56.746355   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.746362   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.746374   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.746390   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.746366   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.746735   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:56.746767   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.746774   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.746850   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.746895   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.746892   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:56.755869   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.755890   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.756201   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:56.756217   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.756231   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.977249   86792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.726155671s)
	I0829 19:56:56.977308   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.977326   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.977611   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.977631   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.977630   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:56.977639   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:56.977646   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:56.977897   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:56.977918   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:56.977928   86792 addons.go:475] Verifying addon metrics-server=true in "newest-cni-371258"
	I0829 19:56:57.318048   86792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.68193977s)
	I0829 19:56:57.318112   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:57.318130   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:57.318701   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:57.318802   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:57.318821   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:57.318846   86792 main.go:141] libmachine: Making call to close driver server
	I0829 19:56:57.318859   86792 main.go:141] libmachine: (newest-cni-371258) Calling .Close
	I0829 19:56:57.319116   86792 main.go:141] libmachine: (newest-cni-371258) DBG | Closing plugin on server side
	I0829 19:56:57.319148   86792 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:56:57.319158   86792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:56:57.320271   86792 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-371258 addons enable metrics-server
	
	I0829 19:56:57.321230   86792 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0829 19:56:57.322309   86792 addons.go:510] duration metric: took 2.626499933s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0829 19:56:57.322352   86792 start.go:246] waiting for cluster config update ...
	I0829 19:56:57.322368   86792 start.go:255] writing updated cluster config ...
	I0829 19:56:57.322612   86792 ssh_runner.go:195] Run: rm -f paused
	I0829 19:56:57.371213   86792 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:56:57.373953   86792 out.go:177] * Done! kubectl is now configured to use "newest-cni-371258" cluster and "default" namespace by default
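
	Once minikube reports "Done!" and "Verifying addon metrics-server=true", the state it waited for can also be inspected programmatically, much like the system_pods wait earlier in the log. A minimal client-go sketch (not part of minikube) that lists kube-system pods and their phases; the kubeconfig path is an illustrative assumption.

```go
// Minimal sketch: list kube-system pods via client-go, similar in spirit to
// the system_pods.go wait in the log above. Kubeconfig path is assumed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// A healthy cluster would show metrics-server-* and storage-provisioner as Running here.
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
}
```
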
	
	
	==> CRI-O <==
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.547070958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961430547049331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=210eadb5-77bc-4d50-ae99-d83cca75b842 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.547521512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4332fbf-0e79-4dae-ad4a-0704eb7bd423 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.547587174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4332fbf-0e79-4dae-ad4a-0704eb7bd423 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.547844479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4332fbf-0e79-4dae-ad4a-0704eb7bd423 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.584774316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29271a5f-5f88-4012-8764-1c8f2b1bc060 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.584860707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29271a5f-5f88-4012-8764-1c8f2b1bc060 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.586341151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2efbc172-62d6-48e3-9011-97183f06908f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.586796622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961430586773390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2efbc172-62d6-48e3-9011-97183f06908f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.587485647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=573c39c9-0f52-42b8-ab4f-3c95e653b5d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.587546246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=573c39c9-0f52-42b8-ab4f-3c95e653b5d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.587732255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=573c39c9-0f52-42b8-ab4f-3c95e653b5d9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.621151514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f2a314c-8c14-4ea4-acca-840b6ca134ef name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.621263242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f2a314c-8c14-4ea4-acca-840b6ca134ef name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.622252039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faa7ab10-dadc-4e03-b063-0e5d8a73bea5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.622651281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961430622632212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faa7ab10-dadc-4e03-b063-0e5d8a73bea5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.623131940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b16678ed-f698-4c13-bbca-1e2186362cfe name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.623197856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b16678ed-f698-4c13-bbca-1e2186362cfe name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.623520382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b16678ed-f698-4c13-bbca-1e2186362cfe name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.652569532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffd01135-5cd8-4d1e-a305-d4899191201f name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.652653411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffd01135-5cd8-4d1e-a305-d4899191201f name=/runtime.v1.RuntimeService/Version
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.653584512Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=738167f9-0e0c-405a-b8c7-a537e35a90da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.654111541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961430654084167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=738167f9-0e0c-405a-b8c7-a537e35a90da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.654690925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0455aca-5089-4469-9481-155e97680731 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.654755344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0455aca-5089-4469-9481-155e97680731 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:57:10 default-k8s-diff-port-672127 crio[713]: time="2024-08-29 19:57:10.655053016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a,PodSandboxId:fd8c012e46279448c931f423b7c0a3edc3c50acd52330a1662a5e1fbda7f2d21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960414537805680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4,PodSandboxId:f5c11184c99c8cd96632f007ad2b6681d1781872c599eb669f69cea3d2db1ab3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960414000354059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dxbt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84373054-e72e-469a-bf2f-101943117851,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9,PodSandboxId:b95b02a1645a8a880ea12c901e2ae926652ee0b858e60387e7814bb1bbcdc516,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960413932382183,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5p2vn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 8f7749c2-4cb3-4372-8144-46109f9b89b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19,PodSandboxId:c151165840d433405994ae76c44ab066d8587d2fef8d08bb7ae04099359e6b87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724960413117330203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nqbn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5b48a1f-725b-45b7-8a3f-0df0f3371d2f,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa,PodSandboxId:9035323cb17ef464700e3e048a60728bebc848c7e0a0e9ab6728f05cc9b1e490,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960402364816845
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e975e1c07c22e3743cd74281083965,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2,PodSandboxId:816f69dd8790e2dbf99e15b68637854971abfda4bcb97800f0397dc9414a0134,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172496040234
1690608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c5943896148d6b6042e7091fd9bb931,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2,PodSandboxId:c8760b4447461beff647cfc37fff2080daffa0b27d206b8edb07279585f8e23a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172496
0402338040450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58e12510189d6529718bce6143f8cb7f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126,PodSandboxId:c5e190c693572af8beafaaa3d5eabece379ea814116846d388f8f8c76533ae93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960402320080199,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266,PodSandboxId:54135f9d2371b0dabaf8853e269037ee4dd3572c016f276a4863e20e9593559c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960118921682055,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-672127,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4ec219d9e38e4d7ee9c30017b9357f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0455aca-5089-4469-9481-155e97680731 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	618b4f781c25c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   fd8c012e46279       storage-provisioner
	cee7be91cef6a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   f5c11184c99c8       coredns-6f6b679f8f-dxbt5
	2381ec99fe28e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   b95b02a1645a8       coredns-6f6b679f8f-5p2vn
	046b89ea511cf       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   c151165840d43       kube-proxy-nqbn4
	13be5321a8c80       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   17 minutes ago      Running             kube-scheduler            2                   9035323cb17ef       kube-scheduler-default-k8s-diff-port-672127
	426978e9357aa       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   17 minutes ago      Running             kube-controller-manager   2                   816f69dd8790e       kube-controller-manager-default-k8s-diff-port-672127
	e5e7abefb26ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   c8760b4447461       etcd-default-k8s-diff-port-672127
	8e9dc4baf0d69       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   17 minutes ago      Running             kube-apiserver            2                   c5e190c693572       kube-apiserver-default-k8s-diff-port-672127
	21c253f2be8a7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   54135f9d2371b       kube-apiserver-default-k8s-diff-port-672127
	
	
	==> coredns [2381ec99fe28ee5575c33d65237e0e562a5a6ea70fbcc8da25e91c230b77cee9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cee7be91cef6a4df95a6f30e1245ac788cdf5b471cc32cf8e6c534b463530dc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-672127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-672127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=default-k8s-diff-port-672127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:40:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-672127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:57:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:55:34 +0000   Thu, 29 Aug 2024 19:40:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:55:34 +0000   Thu, 29 Aug 2024 19:40:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:55:34 +0000   Thu, 29 Aug 2024 19:40:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:55:34 +0000   Thu, 29 Aug 2024 19:40:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.70
	  Hostname:    default-k8s-diff-port-672127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 24f919e2aeac4008a7f67717f493f871
	  System UUID:                24f919e2-aeac-4008-a7f6-7717f493f871
	  Boot ID:                    bd93af7b-a144-4151-8829-b1780c1e1219
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5p2vn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-dxbt5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-672127                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-672127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-672127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-nqbn4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-672127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-4p8qr                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-672127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-672127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-672127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-672127 event: Registered Node default-k8s-diff-port-672127 in Controller
	
	
	==> dmesg <==
	[  +0.054744] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039908] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.835762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.933018] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug29 19:35] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.307917] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.062221] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071459] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.178657] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.149530] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.319149] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +4.080498] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +2.405199] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +0.069002] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.018328] kauditd_printk_skb: 92 callbacks suppressed
	[  +6.567639] kauditd_printk_skb: 62 callbacks suppressed
	[Aug29 19:40] systemd-fstab-generator[2585]: Ignoring "noauto" option for root device
	[  +0.064460] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.995451] systemd-fstab-generator[2905]: Ignoring "noauto" option for root device
	[  +0.080138] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.262341] systemd-fstab-generator[3016]: Ignoring "noauto" option for root device
	[  +0.115783] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.237308] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [e5e7abefb26ad39be0f3ebe8168138685a191ae8c59b8d277d341d9c157138f2] <==
	{"level":"info","ts":"2024-08-29T19:40:03.359038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"755e1e1acc6a8bb3 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:03.359045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 755e1e1acc6a8bb3 elected leader 755e1e1acc6a8bb3 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:03.363087Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.365194Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"755e1e1acc6a8bb3","local-member-attributes":"{Name:default-k8s-diff-port-672127 ClientURLs:[https://192.168.50.70:2379]}","request-path":"/0/members/755e1e1acc6a8bb3/attributes","cluster-id":"43413f533dca4641","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:40:03.365606Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:03.368744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:03.369502Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:03.377135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.70:2379"}
	{"level":"info","ts":"2024-08-29T19:40:03.377658Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:03.378417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:40:03.369536Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"43413f533dca4641","local-member-id":"755e1e1acc6a8bb3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.378580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.378618Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:03.369991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:40:03.391024Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:50:03.543692Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-08-29T19:50:03.552108Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"8.052578ms","hash":1029992589,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-29T19:50:03.552179Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1029992589,"revision":685,"compact-revision":-1}
	{"level":"info","ts":"2024-08-29T19:55:03.552076Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":928}
	{"level":"info","ts":"2024-08-29T19:55:03.556689Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":928,"took":"3.918245ms","hash":3252425268,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-29T19:55:03.556784Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3252425268,"revision":928,"compact-revision":685}
	{"level":"info","ts":"2024-08-29T19:56:47.768593Z","caller":"traceutil/trace.go:171","msg":"trace[1087114469] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"110.346155ms","start":"2024-08-29T19:56:47.658210Z","end":"2024-08-29T19:56:47.768556Z","steps":["trace[1087114469] 'process raft request'  (duration: 110.208463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T19:56:48.152650Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.890223ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10066549706989567309 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-672127\" mod_revision:1253 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-672127\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-672127\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-29T19:56:48.152764Z","caller":"traceutil/trace.go:171","msg":"trace[182619479] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"352.251512ms","start":"2024-08-29T19:56:47.800498Z","end":"2024-08-29T19:56:48.152749Z","steps":["trace[182619479] 'process raft request'  (duration: 97.478922ms)","trace[182619479] 'compare'  (duration: 253.761ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T19:56:48.152823Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-29T19:56:47.800480Z","time spent":"352.309446ms","remote":"127.0.0.1:42708","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-672127\" mod_revision:1253 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-672127\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-672127\" > >"}
	
	
	==> kernel <==
	 19:57:10 up 22 min,  0 users,  load average: 0.00, 0.04, 0.07
	Linux default-k8s-diff-port-672127 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [21c253f2be8a7770ae53a7ebe3387b8b9ffac2c47439aab92c182013ed3f9266] <==
	W0829 19:39:58.799458       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.806073       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.815632       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.849290       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.913853       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.959124       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.987908       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:58.996577       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.040732       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.073684       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.094129       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.116858       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.157003       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.167572       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.237048       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.254633       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.312152       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.314859       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.433268       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.434635       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.521083       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.545894       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.575188       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.675761       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:39:59.767501       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8e9dc4baf0d69e3869f713b211d9f67465b96abe843f5578b9be3ebb9b8f0126] <==
	I0829 19:53:05.926569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:53:05.926568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:55:04.926446       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:55:04.926587       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:55:05.929087       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:55:05.929232       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:55:05.929138       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:55:05.929314       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:55:05.930454       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:55:05.930479       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:56:05.931646       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:56:05.931756       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0829 19:56:05.931992       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:56:05.932091       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0829 19:56:05.932898       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:56:05.934076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [426978e9357aa278ce81b3fee9d9f96b0a2cd12753daf1617d02becfc623cbb2] <==
	E0829 19:51:41.906098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:51:42.455653       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:52:11.912857       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:52:12.462908       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:52:41.921022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:52:42.474342       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:53:11.927107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:53:12.482693       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:53:41.933468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:53:42.489473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:54:11.939882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:54:12.498018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:54:41.946108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:54:42.506386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:55:11.951789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:55:12.513614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:55:34.889193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-672127"
	E0829 19:55:41.960161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:55:42.520833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:56:11.967135       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:56:12.528520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:56:25.459219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="281.603µs"
	I0829 19:56:37.459685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="86.023µs"
	E0829 19:56:41.974060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:56:42.538340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [046b89ea511cf12db05b7fa3ffd5bef78b13fe226543cc3e898fac1885518f19] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:40:13.487962       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:40:13.498462       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.70"]
	E0829 19:40:13.498538       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:40:13.594201       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:40:13.594240       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:40:13.594292       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:40:13.599117       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:40:13.599341       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:40:13.599363       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:40:13.601247       1 config.go:197] "Starting service config controller"
	I0829 19:40:13.601275       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:40:13.601292       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:40:13.601302       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:40:13.601707       1 config.go:326] "Starting node config controller"
	I0829 19:40:13.601733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:40:13.701455       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 19:40:13.701512       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:40:13.702630       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [13be5321a8c80c0020d830f57402d3a50b64f997dca0b352f757d34343265afa] <==
	W0829 19:40:04.946019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:40:04.946133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.786413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 19:40:05.786455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.787813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:05.787890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.849241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 19:40:05.849369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.878250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:40:05.878545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.897536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:40:05.898037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.972522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 19:40:05.972654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.981736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 19:40:05.981869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:05.984647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:40:05.984804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:06.171369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:06.171467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:06.182311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:06.182406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:06.215292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:40:06.215420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0829 19:40:06.537626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:56:11 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:11.456008    2912 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 29 19:56:11 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:11.456381    2912 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 29 19:56:11 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:11.457141    2912 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kbwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-4p8qr_kube-system(8026c5c8-9f02-45a1-8cc8-9d485dc49cbd): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 29 19:56:11 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:11.458459    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:56:17 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:17.682546    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961377681824337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:17 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:17.682849    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961377681824337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:25 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:25.443512    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:56:27 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:27.684017    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961387683746567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:27 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:27.684060    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961387683746567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:37 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:37.442690    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:56:37 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:37.685800    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961397685358638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:37 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:37.685861    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961397685358638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:47 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:47.687589    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961407687315272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:47 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:47.687635    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961407687315272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:50 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:50.442813    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:56:57 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:57.688992    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961417688543962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:56:57 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:56:57.689490    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961417688543962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:57:03 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:57:03.444617    2912 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4p8qr" podUID="8026c5c8-9f02-45a1-8cc8-9d485dc49cbd"
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:57:07.455907    2912 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:57:07.690956    2912 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961427690690301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:57:07 default-k8s-diff-port-672127 kubelet[2912]: E0829 19:57:07.691003    2912 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961427690690301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
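The repeated ErrImagePull/ImagePullBackOff entries for fake.domain/registry.k8s.io/echoserver:1.4 are expected in this scenario: the Audit table further down shows metrics-server was enabled with --registries=MetricsServer=fake.domain, so the image reference can never resolve and the pod stays in pull back-off. The eviction-manager "missing image stats" and ip6tables canary messages are recurring environment warnings rather than test failures. A hedged client-go sketch for surfacing pods stuck in image-pull states follows (namespace and kubeconfig path come from this log; the program is illustrative, not part of minikube):

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19531-13056/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Report containers whose waiting reason is an image-pull problem
	// (ErrImagePull or ImagePullBackOff), as seen for metrics-server above.
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil && strings.Contains(w.Reason, "ImagePull") {
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, w.Reason, st.Image)
			}
		}
	}
}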
	
	
	==> storage-provisioner [618b4f781c25c0bae32f14d563f92b13c00f2f8ba3cb26883d763e52b32aa53a] <==
	I0829 19:40:14.705277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 19:40:14.716397       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 19:40:14.716851       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 19:40:14.729832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 19:40:14.730064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-672127_2bf8c601-7827-4d3c-9539-177b2122de9c!
	I0829 19:40:14.732719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c355f18f-abcb-4c93-bc0a-543056a89838", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-672127_2bf8c601-7827-4d3c-9539-177b2122de9c became leader
	I0829 19:40:14.830806       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-672127_2bf8c601-7827-4d3c-9539-177b2122de9c!
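The storage-provisioner follows the usual leader-election pattern: it acquires the kube-system/k8s.io-minikube-hostpath lock (backed here by an Endpoints object, as the LeaderElection event shows) before starting its provisioner controller. With an Endpoints-based lock the holder record is typically stored in the control-plane.alpha.kubernetes.io/leader annotation; a minimal sketch for reading it (illustrative only, assuming the same kubeconfig as above):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19531-13056/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The election event above references an Endpoints resource lock, so the
	// current holder record lives in this annotation on the Endpoints object.
	ep, err := cs.CoreV1().Endpoints("kube-system").Get(context.Background(), "k8s.io-minikube-hostpath", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ep.Annotations["control-plane.alpha.kubernetes.io/leader"])
}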
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4p8qr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 describe pod metrics-server-6867b74b74-4p8qr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-672127 describe pod metrics-server-6867b74b74-4p8qr: exit status 1 (56.989915ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4p8qr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-672127 describe pod metrics-server-6867b74b74-4p8qr: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (465.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-690795 -n no-preload-690795
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-29 19:55:40.554393406 +0000 UTC m=+6613.275186187
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-690795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-690795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.488µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-690795 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
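The failure sequence above has the standard shape for this test: the helper polls for pods labelled k8s-app=kubernetes-dashboard for 9 minutes, gives up when the context deadline expires, and the follow-up kubectl describe then fails immediately because the same deadline has already elapsed. A rough sketch of that wait pattern with client-go (label selector, namespace and timeout come from the log; the code is illustrative, not the actual helper in start_stop_delete_test.go):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19531-13056/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 5s, for up to 9m, for a Running dashboard pod - the same
	// label selector and deadline reported by the failed wait above.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatalf("dashboard pod did not start: %v", err)
	}
	fmt.Println("dashboard pod is running")
}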
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-690795 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-690795 logs -n 25: (1.188116313s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC | 29 Aug 24 19:55 UTC |
	| start   | -p newest-cni-371258 --memory=2200 --alsologtostderr   | newest-cni-371258            | jenkins | v1.33.1 | 29 Aug 24 19:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:55:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:55:29.742036   86086 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:55:29.742313   86086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:55:29.742323   86086 out.go:358] Setting ErrFile to fd 2...
	I0829 19:55:29.742330   86086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:55:29.742502   86086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:55:29.743122   86086 out.go:352] Setting JSON to false
	I0829 19:55:29.744105   86086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9477,"bootTime":1724951853,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:55:29.744159   86086 start.go:139] virtualization: kvm guest
	I0829 19:55:29.746512   86086 out.go:177] * [newest-cni-371258] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:55:29.748001   86086 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:55:29.748005   86086 notify.go:220] Checking for updates...
	I0829 19:55:29.750692   86086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:55:29.752015   86086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:55:29.753400   86086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:55:29.754655   86086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:55:29.755850   86086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:55:29.757493   86086 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:55:29.757606   86086 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:55:29.757729   86086 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:55:29.757822   86086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:55:29.794597   86086 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 19:55:29.795791   86086 start.go:297] selected driver: kvm2
	I0829 19:55:29.795807   86086 start.go:901] validating driver "kvm2" against <nil>
	I0829 19:55:29.795819   86086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:55:29.796533   86086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:55:29.796619   86086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:55:29.811208   86086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:55:29.811260   86086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0829 19:55:29.811290   86086 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0829 19:55:29.811557   86086 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0829 19:55:29.811592   86086 cni.go:84] Creating CNI manager for ""
	I0829 19:55:29.811602   86086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:55:29.811614   86086 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 19:55:29.811660   86086 start.go:340] cluster config:
	{Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:55:29.811756   86086 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:55:29.813380   86086 out.go:177] * Starting "newest-cni-371258" primary control-plane node in "newest-cni-371258" cluster
	I0829 19:55:29.814608   86086 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:55:29.814662   86086 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:55:29.814674   86086 cache.go:56] Caching tarball of preloaded images
	I0829 19:55:29.814757   86086 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:55:29.814767   86086 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
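Before provisioning, minikube checks whether the preloaded image tarball for the requested Kubernetes version and runtime is already in its local cache and only downloads it when missing, which is why this run logs "skipping download". A minimal sketch of that check-before-download pattern (the path is the one reported above; the program is a hypothetical illustration, not minikube's code):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Preload tarball path reported by the log for v1.31.0 + cri-o on amd64.
	tarball := "/home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("preload found in cache, skipping download")
		return
	} else if !os.IsNotExist(err) {
		log.Fatal(err)
	}
	fmt.Println("preload missing, a download would be triggered here")
}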
	I0829 19:55:29.814852   86086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/config.json ...
	I0829 19:55:29.814871   86086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/newest-cni-371258/config.json: {Name:mk455de8c085078f967251ed96112054fa9c83c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:55:29.814998   86086 start.go:360] acquireMachinesLock for newest-cni-371258: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:55:29.815023   86086 start.go:364] duration metric: took 14.062µs to acquireMachinesLock for "newest-cni-371258"
	I0829 19:55:29.815039   86086 start.go:93] Provisioning new machine with config: &{Name:newest-cni-371258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:newest-cni-371258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:55:29.815090   86086 start.go:125] createHost starting for "" (driver="kvm2")
	I0829 19:55:29.816753   86086 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 19:55:29.816880   86086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:55:29.816922   86086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:55:29.832301   86086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I0829 19:55:29.832760   86086 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:55:29.833320   86086 main.go:141] libmachine: Using API Version  1
	I0829 19:55:29.833339   86086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:55:29.833722   86086 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:55:29.833898   86086 main.go:141] libmachine: (newest-cni-371258) Calling .GetMachineName
	I0829 19:55:29.834050   86086 main.go:141] libmachine: (newest-cni-371258) Calling .DriverName
	I0829 19:55:29.834213   86086 start.go:159] libmachine.API.Create for "newest-cni-371258" (driver="kvm2")
	I0829 19:55:29.834244   86086 client.go:168] LocalClient.Create starting
	I0829 19:55:29.834277   86086 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem
	I0829 19:55:29.834321   86086 main.go:141] libmachine: Decoding PEM data...
	I0829 19:55:29.834348   86086 main.go:141] libmachine: Parsing certificate...
	I0829 19:55:29.834419   86086 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem
	I0829 19:55:29.834446   86086 main.go:141] libmachine: Decoding PEM data...
	I0829 19:55:29.834461   86086 main.go:141] libmachine: Parsing certificate...
	I0829 19:55:29.834486   86086 main.go:141] libmachine: Running pre-create checks...
	I0829 19:55:29.834506   86086 main.go:141] libmachine: (newest-cni-371258) Calling .PreCreateCheck
	I0829 19:55:29.834895   86086 main.go:141] libmachine: (newest-cni-371258) Calling .GetConfigRaw
	I0829 19:55:29.835328   86086 main.go:141] libmachine: Creating machine...
	I0829 19:55:29.835344   86086 main.go:141] libmachine: (newest-cni-371258) Calling .Create
	I0829 19:55:29.835481   86086 main.go:141] libmachine: (newest-cni-371258) Creating KVM machine...
	I0829 19:55:29.836844   86086 main.go:141] libmachine: (newest-cni-371258) DBG | found existing default KVM network
	I0829 19:55:29.837987   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:29.837806   86109 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:dd:4a:df} reservation:<nil>}
	I0829 19:55:29.838926   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:29.838833   86109 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:5b:f2:15} reservation:<nil>}
	I0829 19:55:29.839584   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:29.839519   86109 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:73:36} reservation:<nil>}
	I0829 19:55:29.840558   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:29.840501   86109 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003523d0}
	I0829 19:55:29.840579   86086 main.go:141] libmachine: (newest-cni-371258) DBG | created network xml: 
	I0829 19:55:29.840585   86086 main.go:141] libmachine: (newest-cni-371258) DBG | <network>
	I0829 19:55:29.840593   86086 main.go:141] libmachine: (newest-cni-371258) DBG |   <name>mk-newest-cni-371258</name>
	I0829 19:55:29.840603   86086 main.go:141] libmachine: (newest-cni-371258) DBG |   <dns enable='no'/>
	I0829 19:55:29.840613   86086 main.go:141] libmachine: (newest-cni-371258) DBG |   
	I0829 19:55:29.840623   86086 main.go:141] libmachine: (newest-cni-371258) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0829 19:55:29.840636   86086 main.go:141] libmachine: (newest-cni-371258) DBG |     <dhcp>
	I0829 19:55:29.840647   86086 main.go:141] libmachine: (newest-cni-371258) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0829 19:55:29.840658   86086 main.go:141] libmachine: (newest-cni-371258) DBG |     </dhcp>
	I0829 19:55:29.840672   86086 main.go:141] libmachine: (newest-cni-371258) DBG |   </ip>
	I0829 19:55:29.840703   86086 main.go:141] libmachine: (newest-cni-371258) DBG |   
	I0829 19:55:29.840727   86086 main.go:141] libmachine: (newest-cni-371258) DBG | </network>
	I0829 19:55:29.840741   86086 main.go:141] libmachine: (newest-cni-371258) DBG | 
	I0829 19:55:29.846611   86086 main.go:141] libmachine: (newest-cni-371258) DBG | trying to create private KVM network mk-newest-cni-371258 192.168.72.0/24...
	I0829 19:55:29.916880   86086 main.go:141] libmachine: (newest-cni-371258) DBG | private KVM network mk-newest-cni-371258 192.168.72.0/24 created
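The DBG lines above show how the kvm2 driver picks a network for the new profile: it walks candidate private /24 subnets, skips any already claimed by an existing virbr interface (192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 here), takes the first free one (192.168.72.0/24) and defines a libvirt network named mk-newest-cni-371258 from the generated XML. A simplified sketch of just the subnet-selection step (candidate list and "taken" set mirror this run; the function is illustrative, not libmachine's implementation):

package main

import "fmt"

// firstFreeSubnet returns the first candidate CIDR that is not already in use.
func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, cidr := range candidates {
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// Subnets probed in the log, in probe order; the first three were occupied
	// by networks of profiles already running on this host.
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
	if cidr, ok := firstFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.72.0/24, as in the log
	}
}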
	I0829 19:55:29.916916   86086 main.go:141] libmachine: (newest-cni-371258) Setting up store path in /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258 ...
	I0829 19:55:29.916933   86086 main.go:141] libmachine: (newest-cni-371258) Building disk image from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 19:55:29.917005   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:29.916887   86109 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:55:29.917139   86086 main.go:141] libmachine: (newest-cni-371258) Downloading /home/jenkins/minikube-integration/19531-13056/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0829 19:55:30.149873   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:30.149719   86109 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/id_rsa...
	I0829 19:55:30.311741   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:30.311621   86109 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/newest-cni-371258.rawdisk...
	I0829 19:55:30.311767   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Writing magic tar header
	I0829 19:55:30.311780   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Writing SSH key tar header
	I0829 19:55:30.311792   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:30.311757   86109 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258 ...
	I0829 19:55:30.311933   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258
	I0829 19:55:30.311981   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube/machines
	I0829 19:55:30.312003   86086 main.go:141] libmachine: (newest-cni-371258) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258 (perms=drwx------)
	I0829 19:55:30.312018   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:55:30.312035   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19531-13056
	I0829 19:55:30.312047   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0829 19:55:30.312058   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home/jenkins
	I0829 19:55:30.312069   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Checking permissions on dir: /home
	I0829 19:55:30.312085   86086 main.go:141] libmachine: (newest-cni-371258) DBG | Skipping /home - not owner
	I0829 19:55:30.312098   86086 main.go:141] libmachine: (newest-cni-371258) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube/machines (perms=drwxr-xr-x)
	I0829 19:55:30.312114   86086 main.go:141] libmachine: (newest-cni-371258) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056/.minikube (perms=drwxr-xr-x)
	I0829 19:55:30.312126   86086 main.go:141] libmachine: (newest-cni-371258) Setting executable bit set on /home/jenkins/minikube-integration/19531-13056 (perms=drwxrwxr-x)
	I0829 19:55:30.312138   86086 main.go:141] libmachine: (newest-cni-371258) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0829 19:55:30.312149   86086 main.go:141] libmachine: (newest-cni-371258) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0829 19:55:30.312162   86086 main.go:141] libmachine: (newest-cni-371258) Creating domain...
	I0829 19:55:30.313199   86086 main.go:141] libmachine: (newest-cni-371258) define libvirt domain using xml: 
	I0829 19:55:30.313221   86086 main.go:141] libmachine: (newest-cni-371258) <domain type='kvm'>
	I0829 19:55:30.313231   86086 main.go:141] libmachine: (newest-cni-371258)   <name>newest-cni-371258</name>
	I0829 19:55:30.313239   86086 main.go:141] libmachine: (newest-cni-371258)   <memory unit='MiB'>2200</memory>
	I0829 19:55:30.313248   86086 main.go:141] libmachine: (newest-cni-371258)   <vcpu>2</vcpu>
	I0829 19:55:30.313258   86086 main.go:141] libmachine: (newest-cni-371258)   <features>
	I0829 19:55:30.313282   86086 main.go:141] libmachine: (newest-cni-371258)     <acpi/>
	I0829 19:55:30.313293   86086 main.go:141] libmachine: (newest-cni-371258)     <apic/>
	I0829 19:55:30.313302   86086 main.go:141] libmachine: (newest-cni-371258)     <pae/>
	I0829 19:55:30.313309   86086 main.go:141] libmachine: (newest-cni-371258)     
	I0829 19:55:30.313321   86086 main.go:141] libmachine: (newest-cni-371258)   </features>
	I0829 19:55:30.313336   86086 main.go:141] libmachine: (newest-cni-371258)   <cpu mode='host-passthrough'>
	I0829 19:55:30.313365   86086 main.go:141] libmachine: (newest-cni-371258)   
	I0829 19:55:30.313388   86086 main.go:141] libmachine: (newest-cni-371258)   </cpu>
	I0829 19:55:30.313399   86086 main.go:141] libmachine: (newest-cni-371258)   <os>
	I0829 19:55:30.313410   86086 main.go:141] libmachine: (newest-cni-371258)     <type>hvm</type>
	I0829 19:55:30.313422   86086 main.go:141] libmachine: (newest-cni-371258)     <boot dev='cdrom'/>
	I0829 19:55:30.313428   86086 main.go:141] libmachine: (newest-cni-371258)     <boot dev='hd'/>
	I0829 19:55:30.313436   86086 main.go:141] libmachine: (newest-cni-371258)     <bootmenu enable='no'/>
	I0829 19:55:30.313456   86086 main.go:141] libmachine: (newest-cni-371258)   </os>
	I0829 19:55:30.313517   86086 main.go:141] libmachine: (newest-cni-371258)   <devices>
	I0829 19:55:30.313543   86086 main.go:141] libmachine: (newest-cni-371258)     <disk type='file' device='cdrom'>
	I0829 19:55:30.313574   86086 main.go:141] libmachine: (newest-cni-371258)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/boot2docker.iso'/>
	I0829 19:55:30.313589   86086 main.go:141] libmachine: (newest-cni-371258)       <target dev='hdc' bus='scsi'/>
	I0829 19:55:30.313598   86086 main.go:141] libmachine: (newest-cni-371258)       <readonly/>
	I0829 19:55:30.313608   86086 main.go:141] libmachine: (newest-cni-371258)     </disk>
	I0829 19:55:30.313618   86086 main.go:141] libmachine: (newest-cni-371258)     <disk type='file' device='disk'>
	I0829 19:55:30.313638   86086 main.go:141] libmachine: (newest-cni-371258)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0829 19:55:30.313653   86086 main.go:141] libmachine: (newest-cni-371258)       <source file='/home/jenkins/minikube-integration/19531-13056/.minikube/machines/newest-cni-371258/newest-cni-371258.rawdisk'/>
	I0829 19:55:30.313662   86086 main.go:141] libmachine: (newest-cni-371258)       <target dev='hda' bus='virtio'/>
	I0829 19:55:30.313667   86086 main.go:141] libmachine: (newest-cni-371258)     </disk>
	I0829 19:55:30.313672   86086 main.go:141] libmachine: (newest-cni-371258)     <interface type='network'>
	I0829 19:55:30.313680   86086 main.go:141] libmachine: (newest-cni-371258)       <source network='mk-newest-cni-371258'/>
	I0829 19:55:30.313686   86086 main.go:141] libmachine: (newest-cni-371258)       <model type='virtio'/>
	I0829 19:55:30.313693   86086 main.go:141] libmachine: (newest-cni-371258)     </interface>
	I0829 19:55:30.313701   86086 main.go:141] libmachine: (newest-cni-371258)     <interface type='network'>
	I0829 19:55:30.313725   86086 main.go:141] libmachine: (newest-cni-371258)       <source network='default'/>
	I0829 19:55:30.313743   86086 main.go:141] libmachine: (newest-cni-371258)       <model type='virtio'/>
	I0829 19:55:30.313755   86086 main.go:141] libmachine: (newest-cni-371258)     </interface>
	I0829 19:55:30.313763   86086 main.go:141] libmachine: (newest-cni-371258)     <serial type='pty'>
	I0829 19:55:30.313776   86086 main.go:141] libmachine: (newest-cni-371258)       <target port='0'/>
	I0829 19:55:30.313785   86086 main.go:141] libmachine: (newest-cni-371258)     </serial>
	I0829 19:55:30.313794   86086 main.go:141] libmachine: (newest-cni-371258)     <console type='pty'>
	I0829 19:55:30.313805   86086 main.go:141] libmachine: (newest-cni-371258)       <target type='serial' port='0'/>
	I0829 19:55:30.313815   86086 main.go:141] libmachine: (newest-cni-371258)     </console>
	I0829 19:55:30.313830   86086 main.go:141] libmachine: (newest-cni-371258)     <rng model='virtio'>
	I0829 19:55:30.313844   86086 main.go:141] libmachine: (newest-cni-371258)       <backend model='random'>/dev/random</backend>
	I0829 19:55:30.313853   86086 main.go:141] libmachine: (newest-cni-371258)     </rng>
	I0829 19:55:30.313861   86086 main.go:141] libmachine: (newest-cni-371258)     
	I0829 19:55:30.313871   86086 main.go:141] libmachine: (newest-cni-371258)     
	I0829 19:55:30.313880   86086 main.go:141] libmachine: (newest-cni-371258)   </devices>
	I0829 19:55:30.313890   86086 main.go:141] libmachine: (newest-cni-371258) </domain>
	I0829 19:55:30.313897   86086 main.go:141] libmachine: (newest-cni-371258) 
	I0829 19:55:30.318691   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:05:c2:9f in network default
	I0829 19:55:30.319350   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:30.319367   86086 main.go:141] libmachine: (newest-cni-371258) Ensuring networks are active...
	I0829 19:55:30.320202   86086 main.go:141] libmachine: (newest-cni-371258) Ensuring network default is active
	I0829 19:55:30.320636   86086 main.go:141] libmachine: (newest-cni-371258) Ensuring network mk-newest-cni-371258 is active
	I0829 19:55:30.321150   86086 main.go:141] libmachine: (newest-cni-371258) Getting domain xml...
	I0829 19:55:30.322056   86086 main.go:141] libmachine: (newest-cni-371258) Creating domain...
	I0829 19:55:31.556238   86086 main.go:141] libmachine: (newest-cni-371258) Waiting to get IP...
	I0829 19:55:31.557233   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:31.557680   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:31.557712   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:31.557665   86109 retry.go:31] will retry after 195.959532ms: waiting for machine to come up
	I0829 19:55:31.755040   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:31.755550   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:31.755575   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:31.755503   86109 retry.go:31] will retry after 342.718033ms: waiting for machine to come up
	I0829 19:55:32.100015   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:32.100574   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:32.100600   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:32.100449   86109 retry.go:31] will retry after 366.441691ms: waiting for machine to come up
	I0829 19:55:32.469235   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:32.469702   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:32.469730   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:32.469653   86109 retry.go:31] will retry after 479.250711ms: waiting for machine to come up
	I0829 19:55:32.950193   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:32.950697   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:32.950722   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:32.950657   86109 retry.go:31] will retry after 729.445922ms: waiting for machine to come up
	I0829 19:55:33.681521   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:33.682015   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:33.682054   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:33.681968   86109 retry.go:31] will retry after 908.642305ms: waiting for machine to come up
	I0829 19:55:34.591752   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:34.592134   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:34.592162   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:34.592087   86109 retry.go:31] will retry after 779.274146ms: waiting for machine to come up
	I0829 19:55:35.372541   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:35.373072   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:35.373100   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:35.373033   86109 retry.go:31] will retry after 1.430465864s: waiting for machine to come up
	I0829 19:55:36.805611   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:36.806068   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:36.806116   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:36.806035   86109 retry.go:31] will retry after 1.646108851s: waiting for machine to come up
	I0829 19:55:38.453348   86086 main.go:141] libmachine: (newest-cni-371258) DBG | domain newest-cni-371258 has defined MAC address 52:54:00:3f:71:aa in network mk-newest-cni-371258
	I0829 19:55:38.453782   86086 main.go:141] libmachine: (newest-cni-371258) DBG | unable to find current IP address of domain newest-cni-371258 in network mk-newest-cni-371258
	I0829 19:55:38.453809   86086 main.go:141] libmachine: (newest-cni-371258) DBG | I0829 19:55:38.453748   86109 retry.go:31] will retry after 2.257137554s: waiting for machine to come up
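
	The "will retry after …: waiting for machine to come up" lines above are the KVM driver repeatedly asking libvirt for the new guest's IP address and backing off between attempts until the machine reports a lease. As a rough illustration only (the helper names below are made up for this sketch and are not minikube's actual retry code), the pattern behind those log lines looks roughly like this in Go:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitForIP is a hypothetical stand-in for the driver's "waiting for machine
	    // to come up" loop: it polls lookup() until an address is returned, sleeping
	    // a little longer (with jitter) after every failed attempt, up to a deadline.
	    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	        delay := 200 * time.Millisecond
	        start := time.Now()
	        for {
	            if ip, err := lookup(); err == nil {
	                return ip, nil
	            }
	            if time.Since(start) > deadline {
	                return "", errors.New("timed out waiting for machine to come up")
	            }
	            // Grow the delay and add jitter, roughly mirroring the
	            // 196ms -> 343ms -> 479ms -> 729ms progression in the log above.
	            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
	            time.Sleep(sleep)
	            delay = delay * 3 / 2
	        }
	    }

	    func main() {
	        attempts := 0
	        ip, err := waitForIP(func() (string, error) {
	            attempts++
	            if attempts < 4 {
	                return "", errors.New("unable to find current IP address")
	            }
	            return "192.168.39.76", nil // illustrative address only
	        }, 30*time.Second)
	        fmt.Println(ip, err)
	    }

	Each failed lookup lengthens the sleep slightly, which is why the intervals logged above grow from roughly 200ms toward multi-second waits.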
	
	
	==> CRI-O <==
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.140402991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961341140379599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1af6f42e-3441-4436-a599-63ba6f6c25d3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.140835179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49b4165a-0516-4325-ac47-0aeecde75ba5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.140884128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49b4165a-0516-4325-ac47-0aeecde75ba5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.141092164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49b4165a-0516-4325-ac47-0aeecde75ba5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.177468510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51eb6e76-4559-4785-8deb-b33819e66b93 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.177545498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51eb6e76-4559-4785-8deb-b33819e66b93 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.178890017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0049ce37-9fe1-45e4-a060-a3fda8263e52 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.179301533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961341179275925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0049ce37-9fe1-45e4-a060-a3fda8263e52 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.179904328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=782c6e10-0cf0-489d-bbea-0d600fc4bdf9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.179990632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=782c6e10-0cf0-489d-bbea-0d600fc4bdf9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.181111460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=782c6e10-0cf0-489d-bbea-0d600fc4bdf9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.218948429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a39b12cb-3eb7-4765-95b6-248763c57ae0 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.219021558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a39b12cb-3eb7-4765-95b6-248763c57ae0 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.220203493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb1309db-9bea-4fd2-8c49-2d11f03d4fc5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.220526989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961341220499093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb1309db-9bea-4fd2-8c49-2d11f03d4fc5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.221066682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc1218aa-766c-4751-9582-10299cb65ae8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.221116748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc1218aa-766c-4751-9582-10299cb65ae8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.221296903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc1218aa-766c-4751-9582-10299cb65ae8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.252608564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e859c3b7-3f28-4a06-b5ba-0fc2984b2ba5 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.252732208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e859c3b7-3f28-4a06-b5ba-0fc2984b2ba5 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.253581222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be0f2648-37c5-4434-b04a-f0d16eb30e22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.253957812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961341253938005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be0f2648-37c5-4434-b04a-f0d16eb30e22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.254419286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17169d0f-a361-4c8b-bcdf-6263ff3b09ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.254484804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17169d0f-a361-4c8b-bcdf-6263ff3b09ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:41 no-preload-690795 crio[700]: time="2024-08-29 19:55:41.254673105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3,PodSandboxId:869495f955c23c928b1a5b85448e5b02ef53b037f90d2f16093fe38b46eac4ab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724960463843338881,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df10c563-06d8-48f8-a6e4-35837195a25d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c,PodSandboxId:d73820d8e93438f7c6b9ace66232f78f0407facf3747dcf6c32c423764c01124,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463443220855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xbfb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94d281f-1fdb-4e33-a060-17cd5981462c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d,PodSandboxId:f81e54f62ae9d4ea268da933c7437d7cc36ba2397eb7dfbeeb4000da1fa8face,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724960463343529596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wr7bq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac
054ab5-3a0e-433e-add6-5817ce6f1c27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655,PodSandboxId:ecbad8a0de810d8e9ea61613f3f1ce982d2f315ef81ea908faf0f099297f5c99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1724960462474443560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p7zvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f4576d-3d3e-4848-9350-3348293318aa,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067,PodSandboxId:0ed5b1e684ff805d190bf27319d4e24203a1f9a62bd3aa19e4a4781e697c7d17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724960451681552476,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99,PodSandboxId:25bff46e36d60f08eece0830369452f8dc9fa2a8b4bb363d44bdd25d944f8a68,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724960451656824746,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb18701278f40660cece17f9f33a9849,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264,PodSandboxId:52f0b1fe265e2cf716fdb7dcc0146a85b1f50e0cc1d61c67696325bc940ed54b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724960451601993886,Labels:map[string]string{io.kubernetes.container.name: kube-schedu
ler,io.kubernetes.pod.name: kube-scheduler-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ae0a7e63f1193ff8ddd81724cfe2882,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930,PodSandboxId:a32c159a1743213c74d746926e8a872ff7f179a5409dc7f35b30c17033897679,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724960451563529690,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0d41bb860df3e9b29440eb119ab23f7,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff,PodSandboxId:fff2e8c50b000ea95ab09d804b9ea35aac68cfa27db0a9246c2aa66265b19c98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724960166797120635,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-690795,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4928e1a3894803c986516977eda8be,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17169d0f-a361-4c8b-bcdf-6263ff3b09ce name=/runtime.v1.RuntimeService/ListContainers
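
	The Version / ImageFsInfo / ListContainers request-response pairs above are periodic polling of CRI-O over its CRI socket (the same unix:///var/run/crio/crio.sock recorded in the node annotations below). A hypothetical standalone client issuing the same ListContainers call — illustrative only, not kubelet's or minikube's code, and assuming the k8s.io/cri-api and google.golang.org/grpc modules — could be sketched as:

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Dial CRI-O's CRI socket; the path matches the cri-socket annotation
	        // shown in the node description below.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        rt := runtimeapi.NewRuntimeServiceClient(conn)
	        // An empty filter asks for every container, which is what produces the
	        // "No filters were applied, returning full container list" debug lines.
	        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	        }
	    }

	The response carries the same container set that is summarized in the container status table below.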
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a7153d12c98b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   869495f955c23       storage-provisioner
	89c065e8e725e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   d73820d8e9343       coredns-6f6b679f8f-xbfb6
	2757f3d6106ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   f81e54f62ae9d       coredns-6f6b679f8f-wr7bq
	379ceac562879       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   ecbad8a0de810       kube-proxy-p7zvh
	1d4eab307b8f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   0ed5b1e684ff8       kube-apiserver-no-preload-690795
	dbf0ae6e4d317       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   25bff46e36d60       etcd-no-preload-690795
	1fc5a190d459d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   52f0b1fe265e2       kube-scheduler-no-preload-690795
	3c721a7921b37       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   a32c159a17432       kube-controller-manager-no-preload-690795
	e3ef809174f4a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   fff2e8c50b000       kube-apiserver-no-preload-690795
	
	
	==> coredns [2757f3d6106ea797679c630cda14c06892595552124c8f6363208e1470fe2a6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [89c065e8e725e4c7b37b01611fadc6a952adf6b719f61020ed65d7a79d37b36c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-690795
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-690795
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=no-preload-690795
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 19:40:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-690795
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:55:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:51:19 +0000   Thu, 29 Aug 2024 19:40:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:51:19 +0000   Thu, 29 Aug 2024 19:40:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:51:19 +0000   Thu, 29 Aug 2024 19:40:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:51:19 +0000   Thu, 29 Aug 2024 19:40:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    no-preload-690795
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08c9c91c767f460fabd230675217c2db
	  System UUID:                08c9c91c-767f-460f-abd2-30675217c2db
	  Boot ID:                    d952d251-7c4e-41f9-b9b6-e5d5f68dd90d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-wr7bq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-xbfb6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-690795                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-690795             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-690795    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-p7zvh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-690795             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-shs88              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-690795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node no-preload-690795 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node no-preload-690795 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-690795 event: Registered Node no-preload-690795 in Controller
	
	
	==> dmesg <==
	[  +0.040737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.049458] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.923752] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.535091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.717454] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.069561] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068501] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.177179] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.153110] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.265381] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[Aug29 19:36] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.061497] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.808195] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +3.641246] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.179925] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.136970] kauditd_printk_skb: 26 callbacks suppressed
	[Aug29 19:40] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.291139] systemd-fstab-generator[3053]: Ignoring "noauto" option for root device
	[  +4.590106] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.472947] systemd-fstab-generator[3373]: Ignoring "noauto" option for root device
	[Aug29 19:41] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +0.091905] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.804889] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [dbf0ae6e4d317bbcee09566ba701bf597691d5ed553759a0a22fd8c66999ab99] <==
	{"level":"info","ts":"2024-08-29T19:40:51.953366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 switched to configuration voters=(5694425758823909849)"}
	{"level":"info","ts":"2024-08-29T19:40:51.953472Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","added-peer-id":"4f06aa0eaa8889d9","added-peer-peer-urls":["https://192.168.39.76:2380"]}
	{"level":"info","ts":"2024-08-29T19:40:52.903579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:52.903730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:52.903768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 received MsgPreVoteResp from 4f06aa0eaa8889d9 at term 1"}
	{"level":"info","ts":"2024-08-29T19:40:52.903805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.903849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 received MsgVoteResp from 4f06aa0eaa8889d9 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.903904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4f06aa0eaa8889d9 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.903939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4f06aa0eaa8889d9 elected leader 4f06aa0eaa8889d9 at term 2"}
	{"level":"info","ts":"2024-08-29T19:40:52.905822Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.906937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.907030Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.907078Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T19:40:52.907174Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4f06aa0eaa8889d9","local-member-attributes":"{Name:no-preload-690795 ClientURLs:[https://192.168.39.76:2379]}","request-path":"/0/members/4f06aa0eaa8889d9/attributes","cluster-id":"1be8679029844888","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T19:40:52.907225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:52.907735Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T19:40:52.909294Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:52.910091Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T19:40:52.910254Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T19:40:52.910286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T19:40:52.908668Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T19:40:52.911487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.76:2379"}
	{"level":"info","ts":"2024-08-29T19:50:52.946407Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2024-08-29T19:50:52.955590Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":725,"took":"8.543892ms","hash":3188154585,"current-db-size-bytes":2285568,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2285568,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-29T19:50:52.955654Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3188154585,"revision":725,"compact-revision":-1}
	
	
	==> kernel <==
	 19:55:41 up 20 min,  0 users,  load average: 0.22, 0.24, 0.19
	Linux no-preload-690795 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d4eab307b8f18b7f92c586a0902bd87842177a6290ff676b12de0255d342067] <==
	W0829 19:50:55.231120       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:50:55.231312       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:50:55.232145       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:50:55.233259       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:51:55.233205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:51:55.233281       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0829 19:51:55.233568       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:51:55.233674       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:51:55.235363       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:51:55.235453       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0829 19:53:55.236326       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:53:55.236658       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0829 19:53:55.236934       1 handler_proxy.go:99] no RequestInfo found in the context
	E0829 19:53:55.237040       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0829 19:53:55.237879       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0829 19:53:55.238984       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [e3ef809174f4a96fac0d3e8a1adb78b736dbf31c58ae6a58d3bb4025f49f9dff] <==
	W0829 19:40:46.642754       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.678956       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.723074       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.765069       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.820009       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.859222       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.866968       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.908462       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.916088       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.944918       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:46.979034       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.026028       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.027348       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.095314       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.096729       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.166798       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.250565       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.272282       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.419007       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.445651       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.522368       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.644472       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.781414       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.788080       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0829 19:40:47.803535       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3c721a7921b378e4504e2d4610f4b2df8074b382778a4718ba3b2b2ddd95f930] <==
	E0829 19:50:31.280930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:50:31.764117       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:51:01.287067       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:51:01.772970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:51:19.292617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-690795"
	E0829 19:51:31.295378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:51:31.780231       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:52:01.302476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:52:01.787959       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0829 19:52:17.768844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="218.463µs"
	I0829 19:52:28.768863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="142.252µs"
	E0829 19:52:31.308884       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:52:31.795208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:53:01.317107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:53:01.804066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:53:31.323545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:53:31.816398       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:54:01.331035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:54:01.824450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:54:31.337365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:54:31.832640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:55:01.344599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:55:01.842336       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0829 19:55:31.352988       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0829 19:55:31.855395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [379ceac562879d338e481d72acdd211b0b77321d4436c0ba341c0bd027ed7655] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 19:41:02.946558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 19:41:02.957958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.76"]
	E0829 19:41:02.958024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 19:41:03.060123       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 19:41:03.060171       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 19:41:03.060201       1 server_linux.go:169] "Using iptables Proxier"
	I0829 19:41:03.063327       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 19:41:03.063617       1 server.go:483] "Version info" version="v1.31.0"
	I0829 19:41:03.063631       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 19:41:03.066996       1 config.go:197] "Starting service config controller"
	I0829 19:41:03.067024       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 19:41:03.067055       1 config.go:104] "Starting endpoint slice config controller"
	I0829 19:41:03.067062       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 19:41:03.067744       1 config.go:326] "Starting node config controller"
	I0829 19:41:03.067753       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 19:41:03.169421       1 shared_informer.go:320] Caches are synced for node config
	I0829 19:41:03.169479       1 shared_informer.go:320] Caches are synced for service config
	I0829 19:41:03.169507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1fc5a190d459d28edf348021984dc04796f426b8b304e5e640402838981e7264] <==
	W0829 19:40:54.244741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 19:40:54.244769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:54.244823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 19:40:54.244849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:54.244903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:54.244926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.118518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:55.118573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.170012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 19:40:55.170129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.273646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 19:40:55.273846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.273852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:55.274012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.344649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 19:40:55.344732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.367721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 19:40:55.367765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.411433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 19:40:55.411481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.433878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 19:40:55.433925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 19:40:55.714630       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 19:40:55.714857       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 19:40:57.827881       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:54:27 no-preload-690795 kubelet[3380]: E0829 19:54:27.752575    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:54:36 no-preload-690795 kubelet[3380]: E0829 19:54:36.999465    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961276999223438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:54:36 no-preload-690795 kubelet[3380]: E0829 19:54:36.999508    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961276999223438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:54:42 no-preload-690795 kubelet[3380]: E0829 19:54:42.752374    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:54:47 no-preload-690795 kubelet[3380]: E0829 19:54:47.001266    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961287000814222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:54:47 no-preload-690795 kubelet[3380]: E0829 19:54:47.001289    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961287000814222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:54:54 no-preload-690795 kubelet[3380]: E0829 19:54:54.752056    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:54:56 no-preload-690795 kubelet[3380]: E0829 19:54:56.794840    3380 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 19:54:56 no-preload-690795 kubelet[3380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 19:54:56 no-preload-690795 kubelet[3380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 19:54:56 no-preload-690795 kubelet[3380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 19:54:56 no-preload-690795 kubelet[3380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 19:54:57 no-preload-690795 kubelet[3380]: E0829 19:54:57.003170    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961297002626846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:54:57 no-preload-690795 kubelet[3380]: E0829 19:54:57.003210    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961297002626846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:07 no-preload-690795 kubelet[3380]: E0829 19:55:07.005509    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961307005065893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:07 no-preload-690795 kubelet[3380]: E0829 19:55:07.005897    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961307005065893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:09 no-preload-690795 kubelet[3380]: E0829 19:55:09.752373    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:55:17 no-preload-690795 kubelet[3380]: E0829 19:55:17.007920    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961317007460402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:17 no-preload-690795 kubelet[3380]: E0829 19:55:17.008329    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961317007460402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:20 no-preload-690795 kubelet[3380]: E0829 19:55:20.752993    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:55:27 no-preload-690795 kubelet[3380]: E0829 19:55:27.010193    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961327009738803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:27 no-preload-690795 kubelet[3380]: E0829 19:55:27.010813    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961327009738803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:32 no-preload-690795 kubelet[3380]: E0829 19:55:32.755009    3380 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shs88" podUID="cd53f408-7f8a-40ae-93f3-7a00c8ae6646"
	Aug 29 19:55:37 no-preload-690795 kubelet[3380]: E0829 19:55:37.013094    3380 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961337012547911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 29 19:55:37 no-preload-690795 kubelet[3380]: E0829 19:55:37.013132    3380 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961337012547911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a7153d12c98b69899781efbf229ff785521f418d3f4f6373cdd42e7b17d8cab3] <==
	I0829 19:41:04.013795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 19:41:04.028976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 19:41:04.029139       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 19:41:04.037465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 19:41:04.037608       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-690795_162ddbc5-60b9-43cc-a598-796c35f93279!
	I0829 19:41:04.042143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2bda510d-c5dd-4aa1-946c-691215f2b320", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-690795_162ddbc5-60b9-43cc-a598-796c35f93279 became leader
	I0829 19:41:04.138435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-690795_162ddbc5-60b9-43cc-a598-796c35f93279!
	

                                                
                                                
-- /stdout --
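Two related symptoms recur in the dump above: the kubelet repeatedly hits ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 on metrics-server-6867b74b74-shs88, and the kube-apiserver keeps logging 503s for the v1beta1.metrics.k8s.io APIService because its backing service never becomes available. A minimal sketch of checking both symptoms directly against this profile, assuming the cluster is still up; the k8s-app=metrics-server label is an assumption about how the addon labels its pods, while the context and APIService names are taken from the logs above:

	# Pod-level view: should show ImagePullBackOff for
	# fake.domain/registry.k8s.io/echoserver:1.4, matching the kubelet log.
	kubectl --context no-preload-690795 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-690795 -n kube-system describe pods -l k8s-app=metrics-server

	# Aggregation-layer view: the APIService backed by metrics-server should
	# report Available=False, matching the 503s in the kube-apiserver log.
	kubectl --context no-preload-690795 get apiservice v1beta1.metrics.k8s.io -o yaml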
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-690795 -n no-preload-690795
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-690795 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-shs88
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-690795 describe pod metrics-server-6867b74b74-shs88
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-690795 describe pod metrics-server-6867b74b74-shs88: exit status 1 (59.832191ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-shs88" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-690795 describe pod metrics-server-6867b74b74-shs88: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (330.75s)
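Note that the post-mortem first listed metrics-server-6867b74b74-shs88 as non-running and then got NotFound when describing it by name, so a by-name describe can race with pod churn. A rough sketch of re-running the same checks by label and by Deployment instead, assuming the profile is still up and that the Deployment behind ReplicaSet metrics-server-6867b74b74 is named metrics-server (an assumption; the report only names the ReplicaSet):

	# Same query the test helper issued: every pod not in phase Running,
	# across all namespaces.
	kubectl --context no-preload-690795 get po -A --field-selector=status.phase!=Running

	# Inspect the addon Deployment rather than a single pod name, so the
	# check survives ReplicaSet/pod churn.
	kubectl --context no-preload-690795 -n kube-system describe deploy metrics-server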

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (163.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:52:52.705361   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:53:03.951188   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:53:26.706810   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:53:50.041305   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:54:32.802390   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/calico-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:54:49.632527   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:54:57.000671   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
E0829 19:55:12.475156   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.112:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (232.732469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-467349" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-467349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-467349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.096µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-467349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (218.99424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-467349 logs -n 25: (1.52127316s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-633326 sudo cat                              | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo                                  | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo find                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-633326 sudo crio                             | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-633326                                       | bridge-633326                | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	| delete  | -p                                                     | disable-driver-mounts-831934 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | disable-driver-mounts-831934                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:28 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-690795             | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-920571            | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC | 29 Aug 24 19:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:27 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-672127  | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC | 29 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:28 UTC |                     |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-690795                  | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-690795                                   | no-preload-690795            | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC | 29 Aug 24 19:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-920571                 | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:29 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-920571                                  | embed-certs-920571           | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC | 29 Aug 24 19:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-467349        | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-672127       | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-672127 | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:40 UTC |
	|         | default-k8s-diff-port-672127                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-467349             | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC | 29 Aug 24 19:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-467349                              | old-k8s-version-467349       | jenkins | v1.33.1 | 29 Aug 24 19:31 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 19:31:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 19:31:58.737382   79869 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:31:58.737475   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737483   79869 out.go:358] Setting ErrFile to fd 2...
	I0829 19:31:58.737486   79869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:31:58.737664   79869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:31:58.738216   79869 out.go:352] Setting JSON to false
	I0829 19:31:58.739096   79869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8066,"bootTime":1724951853,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:31:58.739164   79869 start.go:139] virtualization: kvm guest
	I0829 19:31:58.741047   79869 out.go:177] * [old-k8s-version-467349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:31:58.742202   79869 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:31:58.742202   79869 notify.go:220] Checking for updates...
	I0829 19:31:58.744035   79869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:31:58.745212   79869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:31:58.746330   79869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:31:58.747599   79869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:31:58.748625   79869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:31:58.749897   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:31:58.750353   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.750402   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.765128   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I0829 19:31:58.765502   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.765933   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.765952   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.766302   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.766478   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.768195   79869 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 19:31:58.769230   79869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:31:58.769562   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:31:58.769599   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:31:58.783969   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
	I0829 19:31:58.784329   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:31:58.784794   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:31:58.784809   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:31:58.785130   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:31:58.785335   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:31:58.821467   79869 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 19:31:58.822695   79869 start.go:297] selected driver: kvm2
	I0829 19:31:58.822708   79869 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.822845   79869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:31:58.823799   79869 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.823887   79869 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 19:31:58.839098   79869 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 19:31:58.839445   79869 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:31:58.839504   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:31:58.839519   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:31:58.839556   79869 start.go:340] cluster config:
	{Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:31:58.839650   79869 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 19:31:58.841263   79869 out.go:177] * Starting "old-k8s-version-467349" primary control-plane node in "old-k8s-version-467349" cluster
	I0829 19:31:58.842265   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:31:58.842301   79869 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 19:31:58.842310   79869 cache.go:56] Caching tarball of preloaded images
	I0829 19:31:58.842386   79869 preload.go:172] Found /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0829 19:31:58.842396   79869 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0829 19:31:58.842476   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:31:58.842637   79869 start.go:360] acquireMachinesLock for old-k8s-version-467349: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:32:00.606343   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:03.678411   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:09.758354   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:12.830416   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:18.910387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:21.982407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:28.062408   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:31.134407   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:37.214369   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:40.286345   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:46.366360   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:49.438406   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:55.518437   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:32:58.590377   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:04.670397   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:07.742436   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:13.822348   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:16.894422   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:22.974353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:26.046337   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:32.126325   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:35.198391   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:41.278353   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:44.350421   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:50.434297   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:53.502296   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:33:59.582448   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:02.654443   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:08.734358   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:11.806435   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:17.886372   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:20.958351   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:27.038356   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:30.110387   78865 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.76:22: connect: no route to host
	I0829 19:34:33.114600   79073 start.go:364] duration metric: took 4m24.136110592s to acquireMachinesLock for "embed-certs-920571"
	I0829 19:34:33.114658   79073 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:33.114666   79073 fix.go:54] fixHost starting: 
	I0829 19:34:33.115014   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:33.115043   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:33.130652   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34641
	I0829 19:34:33.131096   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:33.131536   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:34:33.131555   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:33.131871   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:33.132060   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:33.132217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:34:33.133784   79073 fix.go:112] recreateIfNeeded on embed-certs-920571: state=Stopped err=<nil>
	I0829 19:34:33.133809   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	W0829 19:34:33.133951   79073 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:33.135573   79073 out.go:177] * Restarting existing kvm2 VM for "embed-certs-920571" ...
	I0829 19:34:33.136726   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Start
	I0829 19:34:33.136873   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring networks are active...
	I0829 19:34:33.137613   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network default is active
	I0829 19:34:33.137909   79073 main.go:141] libmachine: (embed-certs-920571) Ensuring network mk-embed-certs-920571 is active
	I0829 19:34:33.138400   79073 main.go:141] libmachine: (embed-certs-920571) Getting domain xml...
	I0829 19:34:33.139091   79073 main.go:141] libmachine: (embed-certs-920571) Creating domain...
	I0829 19:34:33.112327   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:33.112363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112705   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:34:33.112736   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:34:33.112943   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:34:33.114457   78865 machine.go:96] duration metric: took 4m37.430735456s to provisionDockerMachine
	I0829 19:34:33.114505   78865 fix.go:56] duration metric: took 4m37.452542806s for fixHost
	I0829 19:34:33.114516   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 4m37.452590646s
	W0829 19:34:33.114545   78865 start.go:714] error starting host: provision: host is not running
	W0829 19:34:33.114637   78865 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0829 19:34:33.114647   78865 start.go:729] Will try again in 5 seconds ...
	I0829 19:34:34.366249   79073 main.go:141] libmachine: (embed-certs-920571) Waiting to get IP...
	I0829 19:34:34.367233   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.367595   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.367671   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.367580   80412 retry.go:31] will retry after 294.1031ms: waiting for machine to come up
	I0829 19:34:34.663229   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:34.663677   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:34.663709   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:34.663624   80412 retry.go:31] will retry after 345.352879ms: waiting for machine to come up
	I0829 19:34:35.010102   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.010576   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.010604   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.010527   80412 retry.go:31] will retry after 295.49024ms: waiting for machine to come up
	I0829 19:34:35.308077   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.308580   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.308608   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.308525   80412 retry.go:31] will retry after 575.095429ms: waiting for machine to come up
	I0829 19:34:35.885400   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:35.885806   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:35.885835   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:35.885762   80412 retry.go:31] will retry after 524.463725ms: waiting for machine to come up
	I0829 19:34:36.411496   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:36.411840   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:36.411866   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:36.411802   80412 retry.go:31] will retry after 672.277111ms: waiting for machine to come up
	I0829 19:34:37.085978   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:37.086512   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:37.086537   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:37.086473   80412 retry.go:31] will retry after 1.185875442s: waiting for machine to come up
	I0829 19:34:38.274401   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:38.274881   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:38.274914   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:38.274827   80412 retry.go:31] will retry after 1.426721352s: waiting for machine to come up
	I0829 19:34:38.116486   78865 start.go:360] acquireMachinesLock for no-preload-690795: {Name:mk38acfe6f3224efe5d1f4856bd0bbbbc001f8f7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 19:34:39.703333   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:39.703732   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:39.703756   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:39.703691   80412 retry.go:31] will retry after 1.500429564s: waiting for machine to come up
	I0829 19:34:41.206311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:41.206854   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:41.206882   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:41.206766   80412 retry.go:31] will retry after 2.021866027s: waiting for machine to come up
	I0829 19:34:43.230915   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:43.231329   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:43.231382   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:43.231318   80412 retry.go:31] will retry after 2.415112477s: waiting for machine to come up
	I0829 19:34:45.649815   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:45.650169   79073 main.go:141] libmachine: (embed-certs-920571) DBG | unable to find current IP address of domain embed-certs-920571 in network mk-embed-certs-920571
	I0829 19:34:45.650221   79073 main.go:141] libmachine: (embed-certs-920571) DBG | I0829 19:34:45.650140   80412 retry.go:31] will retry after 3.292956483s: waiting for machine to come up
	I0829 19:34:50.094786   79559 start.go:364] duration metric: took 3m31.488453615s to acquireMachinesLock for "default-k8s-diff-port-672127"
	I0829 19:34:50.094847   79559 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:34:50.094857   79559 fix.go:54] fixHost starting: 
	I0829 19:34:50.095330   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:34:50.095367   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:34:50.112044   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0829 19:34:50.112510   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:34:50.112941   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:34:50.112964   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:34:50.113325   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:34:50.113522   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:34:50.113663   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:34:50.115335   79559 fix.go:112] recreateIfNeeded on default-k8s-diff-port-672127: state=Stopped err=<nil>
	I0829 19:34:50.115378   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	W0829 19:34:50.115548   79559 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:34:50.117176   79559 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-672127" ...
	I0829 19:34:48.944274   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.944748   79073 main.go:141] libmachine: (embed-certs-920571) Found IP for machine: 192.168.61.243
	I0829 19:34:48.944776   79073 main.go:141] libmachine: (embed-certs-920571) Reserving static IP address...
	I0829 19:34:48.944793   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has current primary IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.945167   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.945195   79073 main.go:141] libmachine: (embed-certs-920571) Reserved static IP address: 192.168.61.243
	I0829 19:34:48.945214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | skip adding static IP to network mk-embed-certs-920571 - found existing host DHCP lease matching {name: "embed-certs-920571", mac: "52:54:00:35:28:22", ip: "192.168.61.243"}
	I0829 19:34:48.945225   79073 main.go:141] libmachine: (embed-certs-920571) Waiting for SSH to be available...
	I0829 19:34:48.945236   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Getting to WaitForSSH function...
	I0829 19:34:48.947646   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948004   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:48.948034   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:48.948132   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH client type: external
	I0829 19:34:48.948162   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa (-rw-------)
	I0829 19:34:48.948280   79073 main.go:141] libmachine: (embed-certs-920571) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:34:48.948307   79073 main.go:141] libmachine: (embed-certs-920571) DBG | About to run SSH command:
	I0829 19:34:48.948328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | exit 0
	I0829 19:34:49.073781   79073 main.go:141] libmachine: (embed-certs-920571) DBG | SSH cmd err, output: <nil>: 
	I0829 19:34:49.074184   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetConfigRaw
	I0829 19:34:49.074813   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.077014   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077349   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.077369   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.077550   79073 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/config.json ...
	I0829 19:34:49.077724   79073 machine.go:93] provisionDockerMachine start ...
	I0829 19:34:49.077739   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.077936   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.080112   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080448   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.080472   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.080548   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.080715   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080853   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.080983   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.081110   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.081294   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.081306   79073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:34:49.182232   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:34:49.182282   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182556   79073 buildroot.go:166] provisioning hostname "embed-certs-920571"
	I0829 19:34:49.182582   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.182783   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.185368   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185727   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.185751   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.185901   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.186077   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186237   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.186379   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.186505   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.186721   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.186740   79073 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-920571 && echo "embed-certs-920571" | sudo tee /etc/hostname
	I0829 19:34:49.300225   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-920571
	
	I0829 19:34:49.300261   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.303129   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303497   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.303528   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.303682   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.303883   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304061   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.304193   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.304466   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.304650   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.304667   79073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-920571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-920571/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-920571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:34:49.413678   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:34:49.413710   79073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:34:49.413765   79073 buildroot.go:174] setting up certificates
	I0829 19:34:49.413774   79073 provision.go:84] configureAuth start
	I0829 19:34:49.413786   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetMachineName
	I0829 19:34:49.414069   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:49.416618   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.416965   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.416993   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.417143   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.419308   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419585   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.419630   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.419746   79073 provision.go:143] copyHostCerts
	I0829 19:34:49.419802   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:34:49.419820   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:34:49.419882   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:34:49.419973   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:34:49.419981   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:34:49.420005   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:34:49.420055   79073 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:34:49.420063   79073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:34:49.420083   79073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:34:49.420129   79073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.embed-certs-920571 san=[127.0.0.1 192.168.61.243 embed-certs-920571 localhost minikube]
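
The `generating server cert` step above signs a server certificate with the machine CA, using the listed IPs and hostnames as subject alternative names. A minimal, self-contained Go sketch of that kind of signing step (not minikube's provision code; it assumes the CA key is a PKCS#1 RSA key and writes only the certificate to stdout):

// servercert_sketch.go - illustrative only; not minikube's implementation.
// Signs a server certificate with an existing CA, using the SANs seen in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecodePEM reads a file and returns the DER bytes of its first PEM block.
func mustDecodePEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	// CA material (relative paths are placeholders; the log uses the .minikube/certs dir).
	caCert, err := x509.ParseCertificate(mustDecodePEM("certs/ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	// Assumption: the CA key is an RSA key in PKCS#1 form ("RSA PRIVATE KEY").
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecodePEM("certs/ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate; a real provisioner would also save this key.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// SANs mirror the log line: 127.0.0.1, 192.168.61.243, hostname aliases.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-920571"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.243")},
		DNSNames:     []string{"embed-certs-920571", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

A certificate produced this way corresponds to the server.pem / server-key.pem pair that the copyRemoteCerts step below copies into /etc/docker on the guest.
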
	I0829 19:34:49.488345   79073 provision.go:177] copyRemoteCerts
	I0829 19:34:49.488396   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:34:49.488418   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.490954   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491290   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.491328   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.491473   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.491667   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.491794   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.491932   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.571847   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:34:49.594401   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 19:34:49.615988   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:34:49.638030   79073 provision.go:87] duration metric: took 224.241128ms to configureAuth
	I0829 19:34:49.638058   79073 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:34:49.638251   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:34:49.638342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.640876   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641214   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.641244   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.641439   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.641662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.641941   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.642126   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.642292   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.642307   79073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:34:49.862247   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:34:49.862276   79073 machine.go:96] duration metric: took 784.541058ms to provisionDockerMachine
	I0829 19:34:49.862286   79073 start.go:293] postStartSetup for "embed-certs-920571" (driver="kvm2")
	I0829 19:34:49.862296   79073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:34:49.862325   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:49.862632   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:34:49.862660   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.865463   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.865871   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.865899   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.866068   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.866285   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.866459   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.866644   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:49.948826   79073 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:34:49.952779   79073 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:34:49.952800   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:34:49.952858   79073 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:34:49.952935   79073 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:34:49.953034   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:34:49.962083   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:49.986910   79073 start.go:296] duration metric: took 124.612025ms for postStartSetup
	I0829 19:34:49.986944   79073 fix.go:56] duration metric: took 16.872279139s for fixHost
	I0829 19:34:49.986964   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:49.989581   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.989919   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:49.989946   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:49.990080   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:49.990281   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990519   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:49.990662   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:49.990835   79073 main.go:141] libmachine: Using SSH client type: native
	I0829 19:34:49.991009   79073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.243 22 <nil> <nil>}
	I0829 19:34:49.991020   79073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:34:50.094598   79073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960090.067799977
	
	I0829 19:34:50.094618   79073 fix.go:216] guest clock: 1724960090.067799977
	I0829 19:34:50.094626   79073 fix.go:229] Guest: 2024-08-29 19:34:50.067799977 +0000 UTC Remote: 2024-08-29 19:34:49.98694779 +0000 UTC m=+281.148944887 (delta=80.852187ms)
	I0829 19:34:50.094667   79073 fix.go:200] guest clock delta is within tolerance: 80.852187ms
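
The fix-host step runs `date +%s.%N` on the guest and compares the result against the host clock, accepting the machine when the delta stays within tolerance (about 81ms here). A short sketch of that comparison using the values from the log; the one-second tolerance is an assumption for illustration only:

// clockdelta_sketch.go - illustrative only.
// Reproduces the guest-vs-host clock comparison using the values logged above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output such as "1724960090.067799977" into a time.Time.
func parseEpoch(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		// Right-pad to nine digits so the fraction is read as nanoseconds.
		frac += strings.Repeat("0", 9-len(frac))
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	const tolerance = time.Second // hypothetical threshold; the log accepts an ~81ms delta

	// Guest clock as reported by `date +%s.%N` over SSH (value from the log above).
	guest, err := parseEpoch("1724960090.067799977")
	if err != nil {
		panic(err)
	}
	// Host-side reference time, taken from the log's "Remote:" field.
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2024-08-29 19:34:49.98694779 +0000 UTC")
	if err != nil {
		panic(err)
	}

	delta := guest.Sub(host)
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("guest clock is too far off; a resync would happen here")
	}
}
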
	I0829 19:34:50.094672   79073 start.go:83] releasing machines lock for "embed-certs-920571", held for 16.98003549s
	I0829 19:34:50.094697   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.094962   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:50.097867   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098301   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.098331   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.098494   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099007   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099190   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:34:50.099276   79073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:34:50.099322   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.099429   79073 ssh_runner.go:195] Run: cat /version.json
	I0829 19:34:50.099453   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:34:50.101909   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.101932   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102283   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102311   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102342   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:50.102363   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:50.102460   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:34:50.102647   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102717   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:34:50.102818   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102899   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:34:50.102964   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.103032   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:34:50.178744   79073 ssh_runner.go:195] Run: systemctl --version
	I0829 19:34:50.220024   79073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:34:50.370308   79073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:34:50.379363   79073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:34:50.379435   79073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:34:50.394787   79073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
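
The `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` run above sidelines any bridge or podman CNI definitions so they cannot conflict with the CNI configuration minikube applies later. A small sketch of the same rename pass, run locally instead of over SSH with sudo:

// cnidisable_sketch.go - illustrative only; the real command runs over SSH with sudo.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		log.Fatal(err)
	}
	for _, path := range entries {
		name := filepath.Base(path)
		// Only bridge/podman configs that have not been sidelined already.
		if !(strings.Contains(name, "bridge") || strings.Contains(name, "podman")) ||
			strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		disabled := path + ".mk_disabled"
		if err := os.Rename(path, disabled); err != nil {
			log.Printf("disable %s: %v", path, err)
			continue
		}
		log.Printf("disabled %s bridge cni config", path)
	}
}
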
	I0829 19:34:50.394810   79073 start.go:495] detecting cgroup driver to use...
	I0829 19:34:50.394886   79073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:34:50.410061   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:34:50.423846   79073 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:34:50.423910   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:34:50.437117   79073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:34:50.450318   79073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:34:50.563588   79073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:34:50.706261   79073 docker.go:233] disabling docker service ...
	I0829 19:34:50.706356   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:34:50.721443   79073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:34:50.734284   79073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:34:50.871611   79073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:34:51.006487   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:34:51.019543   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:34:51.036398   79073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:34:51.036444   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.045884   79073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:34:51.045931   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.055634   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.065379   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.075104   79073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:34:51.085560   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.095777   79073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.114679   79073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:34:51.125695   79073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:34:51.135263   79073 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:34:51.135328   79073 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:34:51.148534   79073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
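
Above, probing `net.bridge.bridge-nf-call-iptables` fails with status 255 because the br_netfilter module is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. A rough local sketch of that check-then-fix sequence (the real commands run over SSH with sudo; this sketch must run as root on a Linux host):

// netfilter_sketch.go - illustrative only; run as root on a Linux host.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Probe the bridge netfilter sysctl; it only exists once br_netfilter is loaded.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("probe failed (%v); loading br_netfilter", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}
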
	I0829 19:34:51.158658   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:51.281185   79073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:34:51.378558   79073 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:34:51.378618   79073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:34:51.383580   79073 start.go:563] Will wait 60s for crictl version
	I0829 19:34:51.383638   79073 ssh_runner.go:195] Run: which crictl
	I0829 19:34:51.387081   79073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:34:51.426413   79073 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:34:51.426491   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.453777   79073 ssh_runner.go:195] Run: crio --version
	I0829 19:34:51.481306   79073 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:34:50.118500   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Start
	I0829 19:34:50.118776   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring networks are active...
	I0829 19:34:50.119618   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network default is active
	I0829 19:34:50.120105   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Ensuring network mk-default-k8s-diff-port-672127 is active
	I0829 19:34:50.120501   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Getting domain xml...
	I0829 19:34:50.121238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Creating domain...
	I0829 19:34:51.414344   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting to get IP...
	I0829 19:34:51.415308   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.415790   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.415692   80540 retry.go:31] will retry after 256.92247ms: waiting for machine to come up
	I0829 19:34:51.674173   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674728   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:51.674754   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:51.674670   80540 retry.go:31] will retry after 338.812858ms: waiting for machine to come up
	I0829 19:34:52.015450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.015977   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.016009   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.015920   80540 retry.go:31] will retry after 385.497306ms: waiting for machine to come up
	I0829 19:34:52.403718   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404324   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.404361   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.404259   80540 retry.go:31] will retry after 536.615454ms: waiting for machine to come up
	I0829 19:34:52.943166   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943709   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:52.943736   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:52.943678   80540 retry.go:31] will retry after 584.895039ms: waiting for machine to come up
	I0829 19:34:51.482485   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetIP
	I0829 19:34:51.485272   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485599   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:34:51.485632   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:34:51.485803   79073 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0829 19:34:51.490493   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:51.505212   79073 kubeadm.go:883] updating cluster {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:34:51.505359   79073 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:34:51.505413   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:51.539415   79073 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:34:51.539485   79073 ssh_runner.go:195] Run: which lz4
	I0829 19:34:51.543107   79073 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:34:51.546831   79073 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:34:51.546864   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:34:52.815579   79073 crio.go:462] duration metric: took 1.272496626s to copy over tarball
	I0829 19:34:52.815659   79073 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
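
The preload step above first asks CRI-O (via `crictl images --output json`) whether the expected control-plane images are already present; since they are not, the cached image tarball is copied to the guest and unpacked into /var with `tar -I lz4`. A rough sketch of that decision, run locally as root and using a plain substring check rather than full JSON decoding:

// preload_sketch.go - illustrative only; assumes crictl, tar and lz4 are installed and the process runs as root.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const wantImage = "registry.k8s.io/kube-apiserver:v1.31.0"
	const tarball = "/preloaded.tar.lz4" // path used in the log

	// Ask the CRI runtime which images it already has.
	out, err := exec.Command("crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl images: %v", err)
	}
	if strings.Contains(string(out), wantImage) {
		log.Println("all images are preloaded, skipping extraction")
		return
	}

	// Not preloaded: unpack the cached tarball into /var, as the log does.
	log.Println("couldn't find preloaded image, extracting tarball")
	extract := exec.Command("tar", "--xattrs",
		"--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := extract.Run(); err != nil {
		log.Fatalf("extracting %s: %v", tarball, err)
	}
	log.Println("preloaded images extracted")
}
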
	I0829 19:34:53.530873   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531510   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:53.531540   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:53.531452   80540 retry.go:31] will retry after 790.882954ms: waiting for machine to come up
	I0829 19:34:54.324385   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324785   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:54.324817   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:54.324706   80540 retry.go:31] will retry after 815.842176ms: waiting for machine to come up
	I0829 19:34:55.142878   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:55.143375   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:55.143325   80540 retry.go:31] will retry after 1.177682749s: waiting for machine to come up
	I0829 19:34:56.322780   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323215   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:56.323248   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:56.323160   80540 retry.go:31] will retry after 1.158169512s: waiting for machine to come up
	I0829 19:34:57.483529   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.483990   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:57.484023   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:57.483917   80540 retry.go:31] will retry after 1.631842784s: waiting for machine to come up
	I0829 19:34:54.931044   79073 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.115353131s)
	I0829 19:34:54.931077   79073 crio.go:469] duration metric: took 2.115468165s to extract the tarball
	I0829 19:34:54.931086   79073 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:34:54.967902   79073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:34:55.006987   79073 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:34:55.007010   79073 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:34:55.007017   79073 kubeadm.go:934] updating node { 192.168.61.243 8443 v1.31.0 crio true true} ...
	I0829 19:34:55.007123   79073 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-920571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:34:55.007187   79073 ssh_runner.go:195] Run: crio config
	I0829 19:34:55.051987   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:34:55.052016   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:34:55.052039   79073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:34:55.052077   79073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.243 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-920571 NodeName:embed-certs-920571 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:34:55.052254   79073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-920571"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:34:55.052337   79073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:34:55.061509   79073 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:34:55.061586   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:34:55.070182   79073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0829 19:34:55.086180   79073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:34:55.103184   79073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0829 19:34:55.119226   79073 ssh_runner.go:195] Run: grep 192.168.61.243	control-plane.minikube.internal$ /etc/hosts
	I0829 19:34:55.122845   79073 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:34:55.133782   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:34:55.266431   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:34:55.283043   79073 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571 for IP: 192.168.61.243
	I0829 19:34:55.283066   79073 certs.go:194] generating shared ca certs ...
	I0829 19:34:55.283081   79073 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:34:55.283237   79073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:34:55.283287   79073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:34:55.283297   79073 certs.go:256] generating profile certs ...
	I0829 19:34:55.283438   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/client.key
	I0829 19:34:55.283519   79073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key.dda9dcff
	I0829 19:34:55.283573   79073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key
	I0829 19:34:55.283708   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:34:55.283773   79073 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:34:55.283793   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:34:55.283831   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:34:55.283869   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:34:55.283901   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:34:55.283957   79073 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:34:55.284835   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:34:55.330384   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:34:55.366718   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:34:55.393815   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:34:55.436855   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0829 19:34:55.463343   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:34:55.487693   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:34:55.511657   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/embed-certs-920571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:34:55.536017   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:34:55.558298   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:34:55.579840   79073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:34:55.601271   79073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:34:55.616634   79073 ssh_runner.go:195] Run: openssl version
	I0829 19:34:55.621890   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:34:55.633224   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637431   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.637486   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:34:55.643034   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:34:55.654607   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:34:55.666297   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670433   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.670492   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:34:55.675787   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:34:55.686953   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:34:55.697241   79073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701133   79073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.701189   79073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:34:55.706242   79073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:34:55.716165   79073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:34:55.720159   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:34:55.727612   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:34:55.734806   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:34:55.742352   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:34:55.749483   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:34:55.756543   79073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
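
Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now. The same check can be expressed with Go's crypto/x509; the file path below is just one of the certificates listed above:

// checkend_sketch.go - illustrative only; Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	const path = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// -checkend 86400: fail if the certificate expires within the next 24 hours.
	cutoff := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(cutoff) {
		fmt.Printf("certificate %s expires at %s: will expire within 24h\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate %s is valid until %s\n", path, cert.NotAfter)
}
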
	I0829 19:34:55.763413   79073 kubeadm.go:392] StartCluster: {Name:embed-certs-920571 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-920571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:34:55.763499   79073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:34:55.763537   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.803136   79073 cri.go:89] found id: ""
	I0829 19:34:55.803219   79073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:34:55.812851   79073 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:34:55.812868   79073 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:34:55.812907   79073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:34:55.823461   79073 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:34:55.824969   79073 kubeconfig.go:125] found "embed-certs-920571" server: "https://192.168.61.243:8443"
	I0829 19:34:55.828095   79073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:34:55.838579   79073 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.243
	I0829 19:34:55.838616   79073 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:34:55.838626   79073 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:34:55.838669   79073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:34:55.876618   79073 cri.go:89] found id: ""
	I0829 19:34:55.876674   79073 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:34:55.893401   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:34:55.902557   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:34:55.902579   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:34:55.902631   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:34:55.911349   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:34:55.911407   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:34:55.920377   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:34:55.928764   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:34:55.928824   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:34:55.937630   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.945836   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:34:55.945897   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:34:55.954491   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:34:55.962466   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:34:55.962517   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
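
The grep/rm pairs above apply a simple rule: any leftover kubeconfig that does not reference `https://control-plane.minikube.internal:8443` is removed so the following kubeadm phases can regenerate it. A local sketch of that loop (the real commands run over SSH with sudo):

// staleconf_sketch.go - illustrative only.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing somewhere else: remove it so kubeadm rewrites it.
			log.Printf("%q may not be in %s - will remove", endpoint, f)
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", f, rmErr)
			}
			continue
		}
		log.Printf("%s already points at %s, keeping it", f, endpoint)
	}
}
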
	I0829 19:34:55.971080   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:34:55.979709   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:56.086301   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.378119   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.29178222s)
	I0829 19:34:57.378153   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.574026   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.655499   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:34:57.755371   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:34:57.755457   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.255939   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:58.755813   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.117916   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118404   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:34:59.118427   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:34:59.118355   80540 retry.go:31] will retry after 2.806936823s: waiting for machine to come up
	I0829 19:35:01.927079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927450   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:01.927473   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:01.927422   80540 retry.go:31] will retry after 3.008556566s: waiting for machine to come up
	I0829 19:34:59.255536   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.756296   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:34:59.802484   79073 api_server.go:72] duration metric: took 2.047112988s to wait for apiserver process to appear ...
	I0829 19:34:59.802516   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:34:59.802537   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:34:59.803088   79073 api_server.go:269] stopped: https://192.168.61.243:8443/healthz: Get "https://192.168.61.243:8443/healthz": dial tcp 192.168.61.243:8443: connect: connection refused
	I0829 19:35:00.302707   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.439793   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.439825   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.439837   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.482217   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:02.482245   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:02.802617   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:02.811079   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:02.811116   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.303128   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.307613   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:03.307657   79073 api_server.go:103] status: https://192.168.61.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:03.803189   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:35:03.809164   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:35:03.816623   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:03.816649   79073 api_server.go:131] duration metric: took 4.014126212s to wait for apiserver health ...
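
The healthz wait above is a plain poll loop against https://192.168.61.243:8443/healthz: the connection is first refused, then requests get 403 while RBAC is still bootstrapping, then 500 until the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A sketch of such a loop follows; it skips TLS verification and sends no client certificate, which is an assumption and not how minikube authenticates.

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    url := "https://192.168.61.243:8443/healthz" // endpoint from the log
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Println("healthz returned", resp.StatusCode, "- retrying")
        } else {
            fmt.Println("healthz not reachable yet:", err)
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for apiserver healthz")
}
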
	I0829 19:35:03.816657   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:35:03.816664   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:03.818484   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:03.819706   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:03.833365   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
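
The bridge CNI setup is just two steps: create /etc/cni/net.d and write a 496-byte conflist into it. The actual conflist content is not shown in the log, so the JSON below is a typical bridge-plus-portmap configuration, not necessarily the one minikube generates; the Go wrapper is likewise only a sketch and needs root to run.

package main

import (
    "log"
    "os"
)

// Illustrative bridge CNI configuration; minikube's real 1-k8s.conflist is not
// reproduced in the log, so treat this JSON as an assumption.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
    if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
        log.Fatal(err)
    }
    if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
        log.Fatal(err)
    }
}
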
	I0829 19:35:03.851607   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:03.861274   79073 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:03.861313   79073 system_pods.go:61] "coredns-6f6b679f8f-2wrn6" [05e03841-faab-4fd4-88c9-199b39a71ba6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:03.861320   79073 system_pods.go:61] "etcd-embed-certs-920571" [5545a51a-3b76-4b39-b347-6f68b8d7edbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:03.861328   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [cecb3e4e-9d55-4dc9-8d14-884ffbf56475] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:03.861334   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [77e06ace-0262-418f-b41c-700aabf2fa1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:03.861338   79073 system_pods.go:61] "kube-proxy-hflpk" [a57a1785-8ccf-4955-b5b2-19c72032d9f5] Running
	I0829 19:35:03.861353   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [bdb2ed9c-3bf2-4e91-b6a4-ba947dab93ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:03.861359   79073 system_pods.go:61] "metrics-server-6867b74b74-xs5gp" [98380519-4a65-4208-b9cc-f1941a5c2f01] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:03.861362   79073 system_pods.go:61] "storage-provisioner" [d18a769f-283f-4db3-aad0-82fc0267980f] Running
	I0829 19:35:03.861368   79073 system_pods.go:74] duration metric: took 9.738329ms to wait for pod list to return data ...
	I0829 19:35:03.861375   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:03.865311   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:03.865341   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:03.865355   79073 node_conditions.go:105] duration metric: took 3.974661ms to run NodePressure ...
	I0829 19:35:03.865373   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:04.939084   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939532   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | unable to find current IP address of domain default-k8s-diff-port-672127 in network mk-default-k8s-diff-port-672127
	I0829 19:35:04.939567   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | I0829 19:35:04.939479   80540 retry.go:31] will retry after 3.738266407s: waiting for machine to come up
	I0829 19:35:04.123411   79073 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127613   79073 kubeadm.go:739] kubelet initialised
	I0829 19:35:04.127639   79073 kubeadm.go:740] duration metric: took 4.197494ms waiting for restarted kubelet to initialise ...
	I0829 19:35:04.127649   79073 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:04.132339   79073 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.136884   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136909   79073 pod_ready.go:82] duration metric: took 4.548897ms for pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.136917   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "coredns-6f6b679f8f-2wrn6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.136927   79073 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.141014   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141037   79073 pod_ready.go:82] duration metric: took 4.103179ms for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.141048   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "etcd-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.141062   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.144778   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144799   79073 pod_ready.go:82] duration metric: took 3.728001ms for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.144807   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.144812   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.255204   79073 pod_ready.go:98] node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255227   79073 pod_ready.go:82] duration metric: took 110.408053ms for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:04.255247   79073 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-920571" hosting pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-920571" has status "Ready":"False"
	I0829 19:35:04.255253   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656086   79073 pod_ready.go:93] pod "kube-proxy-hflpk" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:04.656124   79073 pod_ready.go:82] duration metric: took 400.860776ms for pod "kube-proxy-hflpk" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:04.656137   79073 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:06.674533   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
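
pod_ready.go polls each system-critical pod and also requires the hosting node to report Ready, which is why coredns, etcd, and the control-plane pods are skipped above while embed-certs-920571 still has Ready=False. The sketch below shows the underlying Ready-condition check with client-go; the kubeconfig path is an assumption and this is not minikube's actual helper.

package main

import (
    "context"
    "fmt"
    "log"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
    for _, c := range p.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    // Kubeconfig path is an assumption for the example.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%-50s ready=%v\n", p.Name, podReady(&p))
    }
}
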
	I0829 19:35:09.990963   79869 start.go:364] duration metric: took 3m11.14829615s to acquireMachinesLock for "old-k8s-version-467349"
	I0829 19:35:09.991026   79869 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:09.991035   79869 fix.go:54] fixHost starting: 
	I0829 19:35:09.991429   79869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:09.991472   79869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:10.011456   79869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0829 19:35:10.011867   79869 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:10.012413   79869 main.go:141] libmachine: Using API Version  1
	I0829 19:35:10.012445   79869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:10.012752   79869 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:10.012960   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:10.013132   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetState
	I0829 19:35:10.014878   79869 fix.go:112] recreateIfNeeded on old-k8s-version-467349: state=Stopped err=<nil>
	I0829 19:35:10.014907   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	W0829 19:35:10.015055   79869 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:10.016684   79869 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467349" ...
	I0829 19:35:08.681559   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682042   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Found IP for machine: 192.168.50.70
	I0829 19:35:08.682070   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has current primary IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.682080   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserving static IP address...
	I0829 19:35:08.682524   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Reserved static IP address: 192.168.50.70
	I0829 19:35:08.682564   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.682580   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Waiting for SSH to be available...
	I0829 19:35:08.682609   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | skip adding static IP to network mk-default-k8s-diff-port-672127 - found existing host DHCP lease matching {name: "default-k8s-diff-port-672127", mac: "52:54:00:db:a8:cf", ip: "192.168.50.70"}
	I0829 19:35:08.682623   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Getting to WaitForSSH function...
	I0829 19:35:08.684466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684816   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.684876   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.684957   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH client type: external
	I0829 19:35:08.684982   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa (-rw-------)
	I0829 19:35:08.685032   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:08.685053   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | About to run SSH command:
	I0829 19:35:08.685069   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | exit 0
	I0829 19:35:08.806174   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | SSH cmd err, output: <nil>: 
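
The "waiting for machine to come up" retries earlier in this process, and the "Found IP for machine" line above, come from the kvm2 driver polling libvirt for a DHCP lease that matches the VM's MAC address. A rough shell-side equivalent is sketched below around virsh; the use of virsh is an assumption, since minikube actually talks to libvirt through the libmachine kvm2 plugin.

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    const (
        network = "mk-default-k8s-diff-port-672127" // network name from the log
        mac     = "52:54:00:db:a8:cf"               // MAC address from the log
    )
    for i := 0; i < 30; i++ {
        out, err := exec.Command("virsh", "--connect", "qemu:///system",
            "net-dhcp-leases", network).Output()
        if err == nil {
            for _, line := range strings.Split(string(out), "\n") {
                if strings.Contains(line, mac) {
                    fmt.Println("lease found:", strings.TrimSpace(line))
                    return
                }
            }
        }
        time.Sleep(3 * time.Second) // the log retries with a growing back-off
    }
    fmt.Println("no DHCP lease found for", mac)
}
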
	I0829 19:35:08.806493   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetConfigRaw
	I0829 19:35:08.807134   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:08.809574   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.809900   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.809924   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.810227   79559 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/config.json ...
	I0829 19:35:08.810457   79559 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:08.810478   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:08.810675   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.812964   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.813368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.813620   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.813815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.813994   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.814161   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.814338   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.814533   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.814544   79559 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:08.914370   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:08.914415   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914742   79559 buildroot.go:166] provisioning hostname "default-k8s-diff-port-672127"
	I0829 19:35:08.914782   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:08.914975   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:08.918471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.918829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:08.918857   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:08.919021   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:08.919186   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919373   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:08.919483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:08.919664   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:08.919865   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:08.919884   79559 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-672127 && echo "default-k8s-diff-port-672127" | sudo tee /etc/hostname
	I0829 19:35:09.032573   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-672127
	
	I0829 19:35:09.032606   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.035434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035811   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.035840   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.035999   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.036182   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036350   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.036465   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.036651   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.036833   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.036852   79559 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-672127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-672127/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-672127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:09.142908   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
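
Each provisioning step above is a single command run over SSH as the docker user with the machine's id_rsa key (both the external ssh invocation and the native client show up in the log). Below is a minimal sketch of one such remote command with golang.org/x/crypto/ssh; the host, user, and key path are copied from the log, the rest is illustrative.

package main

import (
    "fmt"
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := os.ReadFile("/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa")
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        log.Fatal(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    }
    client, err := ssh.Dial("tcp", "192.168.50.70:22", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    defer sess.Close()
    out, err := sess.CombinedOutput("hostname")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("remote hostname: %s", out)
}
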
	I0829 19:35:09.142937   79559 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:09.142978   79559 buildroot.go:174] setting up certificates
	I0829 19:35:09.142995   79559 provision.go:84] configureAuth start
	I0829 19:35:09.143010   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetMachineName
	I0829 19:35:09.143258   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.145947   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146313   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.146339   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.146460   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.148631   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.148953   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.148978   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.149128   79559 provision.go:143] copyHostCerts
	I0829 19:35:09.149188   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:09.149204   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:09.149261   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:09.149368   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:09.149378   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:09.149400   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:09.149492   79559 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:09.149501   79559 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:09.149520   79559 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:09.149578   79559 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-672127 san=[127.0.0.1 192.168.50.70 default-k8s-diff-port-672127 localhost minikube]
	I0829 19:35:09.370220   79559 provision.go:177] copyRemoteCerts
	I0829 19:35:09.370277   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:09.370301   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.373233   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373723   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.373756   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.373966   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.374180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.374342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.374496   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.457104   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 19:35:09.481139   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:09.504611   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0829 19:35:09.529597   79559 provision.go:87] duration metric: took 386.586301ms to configureAuth
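
configureAuth copies the host CA material and then signs a fresh server certificate for the machine with the SANs listed above (127.0.0.1, 192.168.50.70, default-k8s-diff-port-672127, localhost, minikube). The following is a self-contained sketch of that signing step with crypto/x509; the throwaway CA, key size, and validity period are assumptions, not what minikube actually uses.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "net"
    "os"
    "time"
)

func must(err error) {
    if err != nil {
        log.Fatal(err)
    }
}

func main() {
    // Throwaway CA so the sketch is self-contained; the real code loads ca.pem/ca-key.pem.
    caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    must(err)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    must(err)
    caCert, err := x509.ParseCertificate(caDER)
    must(err)

    // Server certificate with the SANs seen in the provision.go:117 line above.
    srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    must(err)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-672127"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.70")},
        DNSNames:     []string{"default-k8s-diff-port-672127", "localhost", "minikube"},
    }
    srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    must(err)
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
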
	I0829 19:35:09.529628   79559 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:09.529887   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:09.529989   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.532809   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533309   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.533342   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.533509   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.533743   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.533965   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.534169   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.534372   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.534523   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.534545   79559 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:09.754724   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:09.754752   79559 machine.go:96] duration metric: took 944.279776ms to provisionDockerMachine
	I0829 19:35:09.754766   79559 start.go:293] postStartSetup for "default-k8s-diff-port-672127" (driver="kvm2")
	I0829 19:35:09.754781   79559 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:09.754807   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.755236   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:09.755270   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.757713   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758079   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.758125   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.758274   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.758466   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.758682   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.758823   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.841022   79559 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:09.846051   79559 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:09.846081   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:09.846163   79559 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:09.846254   79559 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:09.846379   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:09.857443   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:09.884662   79559 start.go:296] duration metric: took 129.87923ms for postStartSetup
	I0829 19:35:09.884715   79559 fix.go:56] duration metric: took 19.789853711s for fixHost
	I0829 19:35:09.884739   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.888011   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888562   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.888593   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.888789   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.888976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889188   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.889347   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.889533   79559 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:09.889723   79559 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.70 22 <nil> <nil>}
	I0829 19:35:09.889736   79559 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:09.990749   79559 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960109.967111721
	
	I0829 19:35:09.990772   79559 fix.go:216] guest clock: 1724960109.967111721
	I0829 19:35:09.990782   79559 fix.go:229] Guest: 2024-08-29 19:35:09.967111721 +0000 UTC Remote: 2024-08-29 19:35:09.884720437 +0000 UTC m=+231.415600706 (delta=82.391284ms)
	I0829 19:35:09.990835   79559 fix.go:200] guest clock delta is within tolerance: 82.391284ms
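
The guest clock check compares the output of date +%s.%N on the VM against the time captured on the host; here the delta is about 82ms, which is inside tolerance, so nothing is adjusted. A small sketch of that comparison follows; the guest timestamp is hard-coded from the log and the 2s tolerance is an assumption.

package main

import (
    "fmt"
    "math"
    "time"
)

func main() {
    // Value returned by `date +%s.%N` on the guest in the log above.
    const guestSeconds = 1724960109.967111721
    guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
    host := time.Now() // stand-in for the host-side timestamp minikube captured

    delta := host.Sub(guest)
    const tolerance = 2 * time.Second // assumed tolerance for this sketch
    if math.Abs(float64(delta)) <= float64(tolerance) {
        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds tolerance, clock would be adjusted\n", delta)
    }
}
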
	I0829 19:35:09.990846   79559 start.go:83] releasing machines lock for "default-k8s-diff-port-672127", held for 19.896020367s
	I0829 19:35:09.990891   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.991180   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:09.994076   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994434   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.994459   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.994613   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995121   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995318   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:35:09.995407   79559 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:09.995464   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.995531   79559 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:09.995569   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:35:09.998302   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998673   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998703   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998732   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:09.998750   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:09.998832   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.998976   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:35:09.999026   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999109   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:35:09.999162   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999249   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:35:09.999404   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:09.999395   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:35:10.124503   79559 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:10.130734   79559 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:10.275859   79559 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:10.281662   79559 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:10.281728   79559 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:10.297464   79559 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:10.297488   79559 start.go:495] detecting cgroup driver to use...
	I0829 19:35:10.297553   79559 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:10.316686   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:10.332836   79559 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:10.332880   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:10.347021   79559 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:10.364479   79559 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:10.506136   79559 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:10.659246   79559 docker.go:233] disabling docker service ...
	I0829 19:35:10.659324   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:10.678953   79559 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:10.694844   79559 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:10.837509   79559 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:10.976512   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:10.993421   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:11.013434   79559 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:11.013492   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.023909   79559 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:11.023980   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.038560   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.049911   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.060235   79559 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:11.076772   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.093357   79559 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.110140   79559 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:11.121770   79559 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:11.131641   79559 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:11.131697   79559 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:11.151460   79559 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:11.161320   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:11.286180   79559 ssh_runner.go:195] Run: sudo systemctl restart crio
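
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", the unprivileged-port sysctl), loads br_netfilter, enables IP forwarding, and then restarts crio. The same kind of in-place rewrite is sketched below in Go with regexp instead of sed, covering only the two simplest substitutions; it is illustrative, not minikube's code.

package main

import (
    "log"
    "os"
    "regexp"
)

func main() {
    const path = "/etc/crio/crio.conf.d/02-crio.conf"
    data, err := os.ReadFile(path)
    if err != nil {
        log.Fatal(err)
    }
    conf := string(data)

    // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

    // Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
        log.Fatal(err)
    }
    // After the rewrite, the log runs: systemctl daemon-reload && systemctl restart crio.
}
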
	I0829 19:35:11.382235   79559 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:11.382312   79559 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:11.388226   79559 start.go:563] Will wait 60s for crictl version
	I0829 19:35:11.388299   79559 ssh_runner.go:195] Run: which crictl
	I0829 19:35:11.391832   79559 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:11.429509   79559 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:11.429601   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.457180   79559 ssh_runner.go:195] Run: crio --version
	I0829 19:35:11.487106   79559 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:11.488483   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetIP
	I0829 19:35:11.491607   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.491988   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:35:11.492027   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:35:11.492316   79559 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:11.496448   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:11.512045   79559 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:11.512159   79559 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:11.512219   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:11.549212   79559 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:11.549287   79559 ssh_runner.go:195] Run: which lz4
	I0829 19:35:11.554151   79559 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:11.558691   79559 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:11.558718   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0829 19:35:12.826290   79559 crio.go:462] duration metric: took 1.272173781s to copy over tarball
	I0829 19:35:12.826387   79559 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:10.017965   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .Start
	I0829 19:35:10.018195   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring networks are active...
	I0829 19:35:10.018992   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network default is active
	I0829 19:35:10.019360   79869 main.go:141] libmachine: (old-k8s-version-467349) Ensuring network mk-old-k8s-version-467349 is active
	I0829 19:35:10.019708   79869 main.go:141] libmachine: (old-k8s-version-467349) Getting domain xml...
	I0829 19:35:10.020408   79869 main.go:141] libmachine: (old-k8s-version-467349) Creating domain...
	I0829 19:35:11.298443   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting to get IP...
	I0829 19:35:11.299521   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.300063   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.300152   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.300048   80714 retry.go:31] will retry after 253.519755ms: waiting for machine to come up
	I0829 19:35:11.555694   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.556242   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.556274   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.556187   80714 retry.go:31] will retry after 375.22671ms: waiting for machine to come up
	I0829 19:35:11.932780   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:11.933206   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:11.933233   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:11.933176   80714 retry.go:31] will retry after 329.139276ms: waiting for machine to come up
	I0829 19:35:12.263804   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.264471   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.264501   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.264437   80714 retry.go:31] will retry after 434.457682ms: waiting for machine to come up
	I0829 19:35:12.701184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:12.701773   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:12.701805   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:12.701691   80714 retry.go:31] will retry after 555.961608ms: waiting for machine to come up
	I0829 19:35:13.259670   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:13.260159   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:13.260184   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:13.260080   80714 retry.go:31] will retry after 814.491179ms: waiting for machine to come up
	I0829 19:35:09.162551   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:11.165654   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:13.662027   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:15.034221   79559 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.207800368s)
	I0829 19:35:15.034254   79559 crio.go:469] duration metric: took 2.207935139s to extract the tarball
	I0829 19:35:15.034263   79559 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:15.070411   79559 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:15.117649   79559 crio.go:514] all images are preloaded for cri-o runtime.
	I0829 19:35:15.117675   79559 cache_images.go:84] Images are preloaded, skipping loading
	I0829 19:35:15.117684   79559 kubeadm.go:934] updating node { 192.168.50.70 8444 v1.31.0 crio true true} ...
	I0829 19:35:15.117793   79559 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-672127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:15.117873   79559 ssh_runner.go:195] Run: crio config
	I0829 19:35:15.161749   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:15.161778   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:15.161795   79559 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:15.161815   79559 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.70 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-672127 NodeName:default-k8s-diff-port-672127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:35:15.161949   79559 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.70
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-672127"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:15.162002   79559 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:35:15.171789   79559 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:15.171858   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:15.181011   79559 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0829 19:35:15.197394   79559 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:15.213309   79559 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0829 19:35:15.231088   79559 ssh_runner.go:195] Run: grep 192.168.50.70	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:15.234732   79559 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:15.245700   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:15.368430   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:15.385792   79559 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127 for IP: 192.168.50.70
	I0829 19:35:15.385820   79559 certs.go:194] generating shared ca certs ...
	I0829 19:35:15.385844   79559 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:15.386020   79559 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:15.386108   79559 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:15.386123   79559 certs.go:256] generating profile certs ...
	I0829 19:35:15.386240   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/client.key
	I0829 19:35:15.386324   79559 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key.828c23de
	I0829 19:35:15.386378   79559 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key
	I0829 19:35:15.386523   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:15.386567   79559 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:15.386582   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:15.386615   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:15.386650   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:15.386680   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:15.386736   79559 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:15.387663   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:15.429474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:15.470861   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:15.514906   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:15.552474   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0829 19:35:15.581749   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:15.605874   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:15.629703   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/default-k8s-diff-port-672127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0829 19:35:15.653589   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:15.680222   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:15.706824   79559 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:15.733354   79559 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:15.753069   79559 ssh_runner.go:195] Run: openssl version
	I0829 19:35:15.759905   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:15.770507   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776103   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.776159   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:15.783674   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:15.797519   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:15.809517   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814243   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.814311   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:15.819834   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:15.830130   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:15.840473   79559 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.844974   79559 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.845033   79559 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:15.850619   79559 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:15.860955   79559 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:15.865359   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:15.871149   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:15.876982   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:15.882635   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:15.888020   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:15.893423   79559 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
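	The openssl calls above use "-checkend 86400" to ask whether each control-plane certificate expires within the next 24 hours. As a rough sketch only (an assumption, not the code minikube runs), the same check can be done in Go against a local PEM file; the file path in main is taken from the log and used purely as an example.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate in the given PEM file
	// will expire within the supplied window (e.g. 24h, like -checkend 86400).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}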
	I0829 19:35:15.898989   79559 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-672127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-672127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:15.899085   79559 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:15.899156   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:15.939743   79559 cri.go:89] found id: ""
	I0829 19:35:15.939817   79559 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:15.949877   79559 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:15.949896   79559 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:15.949938   79559 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:15.959436   79559 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:15.960417   79559 kubeconfig.go:125] found "default-k8s-diff-port-672127" server: "https://192.168.50.70:8444"
	I0829 19:35:15.962469   79559 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:15.971672   79559 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.70
	I0829 19:35:15.971700   79559 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:15.971710   79559 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:15.971777   79559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:16.015084   79559 cri.go:89] found id: ""
	I0829 19:35:16.015173   79559 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:16.031614   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:16.044359   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:16.044384   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:16.044448   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:35:16.056073   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:16.056139   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:16.066426   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:35:16.075300   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:16.075368   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:16.084795   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.093739   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:16.093804   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:16.103539   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:35:16.112676   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:16.112744   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:16.121997   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:16.134461   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:16.246853   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.577230   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.330337638s)
	I0829 19:35:17.577271   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.810593   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.892546   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:17.993500   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:17.993595   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:18.494169   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:14.076091   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.076599   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.076622   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.076549   80714 retry.go:31] will retry after 864.469682ms: waiting for machine to come up
	I0829 19:35:14.942675   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:14.943123   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:14.943154   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:14.943068   80714 retry.go:31] will retry after 1.062037578s: waiting for machine to come up
	I0829 19:35:16.006750   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:16.007301   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:16.007336   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:16.007212   80714 retry.go:31] will retry after 1.22747505s: waiting for machine to come up
	I0829 19:35:17.236788   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:17.237262   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:17.237291   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:17.237216   80714 retry.go:31] will retry after 1.663870598s: waiting for machine to come up
	I0829 19:35:15.662198   79073 pod_ready.go:103] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:16.162890   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:16.162919   79073 pod_ready.go:82] duration metric: took 11.506772145s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:16.162931   79073 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:18.170586   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:18.994574   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.493764   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:19.509384   79559 api_server.go:72] duration metric: took 1.515882118s to wait for apiserver process to appear ...
	I0829 19:35:19.509415   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:35:19.509440   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.555577   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.555625   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:21.555642   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:21.572445   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:35:21.572481   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:35:22.009612   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.017592   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.017627   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:22.510148   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:22.516104   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:35:22.516140   79559 api_server.go:103] status: https://192.168.50.70:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:35:23.009648   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:35:23.016342   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:35:23.022852   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:35:23.022878   79559 api_server.go:131] duration metric: took 3.513455745s to wait for apiserver health ...
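	The api_server.go lines above poll the apiserver's /healthz endpoint roughly every 500ms until the 403 and 500 responses give way to a 200 "ok". The snippet below is a minimal sketch of such a probe, assuming a bootstrap-time client that skips TLS verification; the URL, timeout, and polling interval are assumptions modeled on the log, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns HTTP 200
	// or the overall timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// the apiserver's serving cert is not trusted by the probe at this point
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.70:8444/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}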
	I0829 19:35:23.022889   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:35:23.022897   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:23.024557   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:35:23.025764   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:35:23.035743   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:35:23.075272   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:35:23.091948   79559 system_pods.go:59] 8 kube-system pods found
	I0829 19:35:23.091991   79559 system_pods.go:61] "coredns-6f6b679f8f-p92hj" [736e7c46-b945-445f-a404-20a609f766e0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:35:23.092004   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [cf016602-46cd-4972-bdd3-1ef5d881b6e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:35:23.092014   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [eb51ac87-f5e4-4031-84fe-811da2ff8d63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:35:23.092026   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [caf7b777-935f-4351-b58d-60bb8175bec0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:35:23.092034   79559 system_pods.go:61] "kube-proxy-tlc89" [9a11e5a6-b624-494b-8e94-d362b94fb98e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 19:35:23.092043   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fe83e2af-b046-4d56-9b5c-d7a17db7e854] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:35:23.092053   79559 system_pods.go:61] "metrics-server-6867b74b74-tbkxg" [6d8f8c92-4f89-4a2a-8690-51a850768516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:35:23.092065   79559 system_pods.go:61] "storage-provisioner" [7349bb79-c402-4587-ab0b-e52e5d455c61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:35:23.092078   79559 system_pods.go:74] duration metric: took 16.779413ms to wait for pod list to return data ...
	I0829 19:35:23.092091   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:35:23.099492   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:35:23.099533   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:35:23.099547   79559 node_conditions.go:105] duration metric: took 7.450351ms to run NodePressure ...
	I0829 19:35:23.099571   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:23.371279   79559 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377322   79559 kubeadm.go:739] kubelet initialised
	I0829 19:35:23.377346   79559 kubeadm.go:740] duration metric: took 6.045074ms waiting for restarted kubelet to initialise ...
	I0829 19:35:23.377353   79559 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:35:23.384232   79559 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.391931   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391960   79559 pod_ready.go:82] duration metric: took 7.702072ms for pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.391971   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "coredns-6f6b679f8f-p92hj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.391980   79559 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.396708   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396728   79559 pod_ready.go:82] duration metric: took 4.739691ms for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.396736   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.396744   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:23.401274   79559 pod_ready.go:98] node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401298   79559 pod_ready.go:82] duration metric: took 4.546455ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	E0829 19:35:23.401308   79559 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-672127" hosting pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-672127" has status "Ready":"False"
	I0829 19:35:23.401314   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
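	The pod_ready.go lines above wait for each system-critical pod to report the "Ready" condition, skipping pods whose node is itself not Ready yet. As an illustrative sketch only, using client-go rather than minikube's own helpers, a Ready-condition wait could look like the following; the kubeconfig path and the timings are assumptions for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady returns true when the pod carries a Ready condition with status True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the pod until it is Ready or the timeout expires.
	func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitForPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-672127", 4*time.Minute))
	}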
	I0829 19:35:18.903082   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:18.903668   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:18.903691   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:18.903624   80714 retry.go:31] will retry after 2.012998698s: waiting for machine to come up
	I0829 19:35:20.918657   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:20.919143   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:20.919179   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:20.919066   80714 retry.go:31] will retry after 2.674645507s: waiting for machine to come up
	I0829 19:35:23.595218   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:23.595658   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | unable to find current IP address of domain old-k8s-version-467349 in network mk-old-k8s-version-467349
	I0829 19:35:23.595685   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | I0829 19:35:23.595633   80714 retry.go:31] will retry after 3.052784769s: waiting for machine to come up
	I0829 19:35:20.670356   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:22.670699   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.786910   78865 start.go:364] duration metric: took 49.670356886s to acquireMachinesLock for "no-preload-690795"
	I0829 19:35:27.786963   78865 start.go:96] Skipping create...Using existing machine configuration
	I0829 19:35:27.786975   78865 fix.go:54] fixHost starting: 
	I0829 19:35:27.787377   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:35:27.787425   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:35:27.803558   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0829 19:35:27.803903   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:35:27.804328   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:35:27.804348   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:35:27.804623   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:35:27.804824   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:27.804967   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:35:27.806332   78865 fix.go:112] recreateIfNeeded on no-preload-690795: state=Stopped err=<nil>
	I0829 19:35:27.806353   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	W0829 19:35:27.806525   78865 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 19:35:27.808678   78865 out.go:177] * Restarting existing kvm2 VM for "no-preload-690795" ...
	I0829 19:35:25.407622   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.910410   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:26.649643   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650117   79869 main.go:141] libmachine: (old-k8s-version-467349) Found IP for machine: 192.168.72.112
	I0829 19:35:26.650146   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserving static IP address...
	I0829 19:35:26.650161   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has current primary IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.650553   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.650579   79869 main.go:141] libmachine: (old-k8s-version-467349) Reserved static IP address: 192.168.72.112
	I0829 19:35:26.650600   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | skip adding static IP to network mk-old-k8s-version-467349 - found existing host DHCP lease matching {name: "old-k8s-version-467349", mac: "52:54:00:1e:26:7c", ip: "192.168.72.112"}
	I0829 19:35:26.650611   79869 main.go:141] libmachine: (old-k8s-version-467349) Waiting for SSH to be available...
	I0829 19:35:26.650640   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Getting to WaitForSSH function...
	I0829 19:35:26.653157   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653509   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.653528   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.653667   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH client type: external
	I0829 19:35:26.653690   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa (-rw-------)
	I0829 19:35:26.653724   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:26.653741   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | About to run SSH command:
	I0829 19:35:26.653755   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | exit 0
	I0829 19:35:26.778126   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:26.778436   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetConfigRaw
	I0829 19:35:26.779002   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:26.781392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.781745   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.781778   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.782006   79869 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/config.json ...
	I0829 19:35:26.782229   79869 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:26.782249   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:26.782509   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.784806   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785130   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.785148   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.785300   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.785462   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785611   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.785799   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.785923   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.786126   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.786138   79869 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:26.886223   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:26.886256   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886522   79869 buildroot.go:166] provisioning hostname "old-k8s-version-467349"
	I0829 19:35:26.886563   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:26.886756   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:26.889874   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890304   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:26.890324   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:26.890471   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:26.890655   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890821   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:26.890969   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:26.891131   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:26.891333   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:26.891348   79869 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467349 && echo "old-k8s-version-467349" | sudo tee /etc/hostname
	I0829 19:35:27.007493   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467349
	
	I0829 19:35:27.007535   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.010202   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010526   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.010548   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.010737   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.010913   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011080   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.011225   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.011395   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.011548   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.011564   79869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467349/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:27.123357   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:27.123385   79869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:27.123436   79869 buildroot.go:174] setting up certificates
	I0829 19:35:27.123445   79869 provision.go:84] configureAuth start
	I0829 19:35:27.123455   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetMachineName
	I0829 19:35:27.123760   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.126486   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.126819   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.126857   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.127013   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.129089   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129404   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.129429   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.129554   79869 provision.go:143] copyHostCerts
	I0829 19:35:27.129614   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:27.129636   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:27.129704   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:27.129825   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:27.129840   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:27.129871   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:27.129946   79869 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:27.129956   79869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:27.129982   79869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:27.130043   79869 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467349 san=[127.0.0.1 192.168.72.112 localhost minikube old-k8s-version-467349]
	I0829 19:35:27.190556   79869 provision.go:177] copyRemoteCerts
	I0829 19:35:27.190610   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:27.190667   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.193785   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194205   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.194243   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.194406   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.194620   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.194788   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.194962   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.276099   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:27.299820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0829 19:35:27.323625   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:27.347943   79869 provision.go:87] duration metric: took 224.487094ms to configureAuth
	I0829 19:35:27.347970   79869 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:27.348140   79869 config.go:182] Loaded profile config "old-k8s-version-467349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:35:27.348203   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.351042   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351392   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.351420   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.351654   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.351860   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352030   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.352159   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.352321   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.352487   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.352504   79869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:27.565849   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:27.565874   79869 machine.go:96] duration metric: took 783.631791ms to provisionDockerMachine
	I0829 19:35:27.565886   79869 start.go:293] postStartSetup for "old-k8s-version-467349" (driver="kvm2")
	I0829 19:35:27.565897   79869 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:27.565935   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.566274   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:27.566332   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.568900   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569225   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.569258   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.569424   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.569613   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.569795   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.569961   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.648057   79869 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:27.651955   79869 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:27.651984   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:27.652057   79869 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:27.652167   79869 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:27.652311   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:27.660961   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:27.684179   79869 start.go:296] duration metric: took 118.281042ms for postStartSetup
	I0829 19:35:27.684251   79869 fix.go:56] duration metric: took 17.69321583s for fixHost
	I0829 19:35:27.684277   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.686877   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687235   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.687266   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.687429   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.687615   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687751   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.687863   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.687994   79869 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:27.688202   79869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0829 19:35:27.688220   79869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:27.786754   79869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960127.745017542
	
	I0829 19:35:27.786773   79869 fix.go:216] guest clock: 1724960127.745017542
	I0829 19:35:27.786780   79869 fix.go:229] Guest: 2024-08-29 19:35:27.745017542 +0000 UTC Remote: 2024-08-29 19:35:27.684258077 +0000 UTC m=+208.981895804 (delta=60.759465ms)
	I0829 19:35:27.786798   79869 fix.go:200] guest clock delta is within tolerance: 60.759465ms
	I0829 19:35:27.786803   79869 start.go:83] releasing machines lock for "old-k8s-version-467349", held for 17.795804036s
	I0829 19:35:27.786823   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.787066   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:27.789617   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.789937   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.789967   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.790124   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790514   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790689   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .DriverName
	I0829 19:35:27.790781   79869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:27.790827   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.790912   79869 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:27.790937   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHHostname
	I0829 19:35:27.793406   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793495   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793732   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793762   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:27.793781   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793821   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:27.793910   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794075   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794076   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHPort
	I0829 19:35:27.794242   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794419   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.794435   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHKeyPath
	I0829 19:35:27.794646   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetSSHUsername
	I0829 19:35:27.794811   79869 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/old-k8s-version-467349/id_rsa Username:docker}
	I0829 19:35:27.910665   79869 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:27.916917   79869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:28.063525   79869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:28.070848   79869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:28.070907   79869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:28.089204   79869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:28.089226   79869 start.go:495] detecting cgroup driver to use...
	I0829 19:35:28.089291   79869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:28.108528   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:28.122248   79869 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:28.122353   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:28.143014   79869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:28.159322   79869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:28.281356   79869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:28.445101   79869 docker.go:233] disabling docker service ...
	I0829 19:35:28.445162   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:28.460437   79869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:28.474849   79869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:28.609747   79869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:28.734733   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:25.170397   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:27.669465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:28.748605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:28.766945   79869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0829 19:35:28.767014   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.776535   79869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:28.776598   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.787050   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.797552   79869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:28.807575   79869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:28.818319   79869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:28.827289   79869 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:28.827342   79869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:28.839995   79869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:28.849779   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:28.979701   79869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0829 19:35:29.092264   79869 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:29.092344   79869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:29.097310   79869 start.go:563] Will wait 60s for crictl version
	I0829 19:35:29.097366   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:29.101080   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:29.146142   79869 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:29.146228   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.176037   79869 ssh_runner.go:195] Run: crio --version
	I0829 19:35:29.210024   79869 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0829 19:35:27.810111   78865 main.go:141] libmachine: (no-preload-690795) Calling .Start
	I0829 19:35:27.810300   78865 main.go:141] libmachine: (no-preload-690795) Ensuring networks are active...
	I0829 19:35:27.811063   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network default is active
	I0829 19:35:27.811464   78865 main.go:141] libmachine: (no-preload-690795) Ensuring network mk-no-preload-690795 is active
	I0829 19:35:27.811848   78865 main.go:141] libmachine: (no-preload-690795) Getting domain xml...
	I0829 19:35:27.812590   78865 main.go:141] libmachine: (no-preload-690795) Creating domain...
	I0829 19:35:29.131821   78865 main.go:141] libmachine: (no-preload-690795) Waiting to get IP...
	I0829 19:35:29.132876   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.133519   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.133595   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.133481   80876 retry.go:31] will retry after 252.123266ms: waiting for machine to come up
	I0829 19:35:29.387046   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.387534   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.387561   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.387496   80876 retry.go:31] will retry after 304.157394ms: waiting for machine to come up
	I0829 19:35:29.693891   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:29.694581   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:29.694603   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:29.694560   80876 retry.go:31] will retry after 366.980614ms: waiting for machine to come up
	I0829 19:35:30.063032   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.063466   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.063504   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.063431   80876 retry.go:31] will retry after 562.46082ms: waiting for machine to come up
	I0829 19:35:30.412868   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.908366   79559 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.408823   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.408848   79559 pod_ready.go:82] duration metric: took 10.007525744s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.408862   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418176   79559 pod_ready.go:93] pod "kube-proxy-tlc89" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.418202   79559 pod_ready.go:82] duration metric: took 9.33136ms for pod "kube-proxy-tlc89" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.418214   79559 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424362   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:35:33.424388   79559 pod_ready.go:82] duration metric: took 6.165646ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:33.424401   79559 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	I0829 19:35:29.211072   79869 main.go:141] libmachine: (old-k8s-version-467349) Calling .GetIP
	I0829 19:35:29.214489   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.214897   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:26:7c", ip: ""} in network mk-old-k8s-version-467349: {Iface:virbr4 ExpiryTime:2024-08-29 20:35:21 +0000 UTC Type:0 Mac:52:54:00:1e:26:7c Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:old-k8s-version-467349 Clientid:01:52:54:00:1e:26:7c}
	I0829 19:35:29.214932   79869 main.go:141] libmachine: (old-k8s-version-467349) DBG | domain old-k8s-version-467349 has defined IP address 192.168.72.112 and MAC address 52:54:00:1e:26:7c in network mk-old-k8s-version-467349
	I0829 19:35:29.215196   79869 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:29.219742   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:29.233815   79869 kubeadm.go:883] updating cluster {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:29.233934   79869 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 19:35:29.233994   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:29.281512   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:29.281579   79869 ssh_runner.go:195] Run: which lz4
	I0829 19:35:29.285825   79869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 19:35:29.290303   79869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 19:35:29.290349   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0829 19:35:30.843642   79869 crio.go:462] duration metric: took 1.557868582s to copy over tarball
	I0829 19:35:30.843714   79869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 19:35:29.670803   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:32.171154   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:30.627531   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:30.628123   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:30.628147   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:30.628030   80876 retry.go:31] will retry after 488.97189ms: waiting for machine to come up
	I0829 19:35:31.118901   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.119457   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.119480   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.119398   80876 retry.go:31] will retry after 801.189699ms: waiting for machine to come up
	I0829 19:35:31.921939   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:31.922447   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:31.922482   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:31.922391   80876 retry.go:31] will retry after 828.788864ms: waiting for machine to come up
	I0829 19:35:32.752986   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:32.753429   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:32.753465   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:32.753385   80876 retry.go:31] will retry after 1.404436811s: waiting for machine to come up
	I0829 19:35:34.159129   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:34.159714   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:34.159741   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:34.159678   80876 retry.go:31] will retry after 1.312099391s: waiting for machine to come up
	I0829 19:35:35.473045   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:35.473510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:35.473549   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:35.473461   80876 retry.go:31] will retry after 1.46129368s: waiting for machine to come up
	I0829 19:35:35.431524   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:37.437993   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:33.827965   79869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984226389s)
	I0829 19:35:33.827993   79869 crio.go:469] duration metric: took 2.98432047s to extract the tarball
	I0829 19:35:33.828004   79869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 19:35:33.869606   79869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:33.902753   79869 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0829 19:35:33.902782   79869 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:33.902862   79869 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.902867   79869 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.902869   79869 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.902882   79869 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:33.903054   79869 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.903000   79869 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0829 19:35:33.902955   79869 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.902978   79869 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:33.904938   79869 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:33.904960   79869 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:33.904913   79869 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0829 19:35:33.904917   79869 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:33.904920   79869 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.159604   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0829 19:35:34.195935   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.208324   79869 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0829 19:35:34.208373   79869 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0829 19:35:34.208414   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.229776   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.231728   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.241303   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.243523   79869 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0829 19:35:34.243572   79869 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.243589   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.243612   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.256377   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.291584   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.339295   79869 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0829 19:35:34.339344   79869 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.339396   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364510   79869 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0829 19:35:34.364559   79869 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.364565   79869 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0829 19:35:34.364598   79869 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.364608   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.364636   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.364641   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.364642   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.370545   79869 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0829 19:35:34.370580   79869 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.370621   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.401578   79869 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0829 19:35:34.401628   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.401634   79869 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.401651   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.401669   79869 ssh_runner.go:195] Run: which crictl
	I0829 19:35:34.452408   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.452472   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0829 19:35:34.452530   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.452479   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.498680   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.502698   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.502722   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.608235   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.608332   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0829 19:35:34.608345   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.608302   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0829 19:35:34.647702   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0829 19:35:34.647744   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.647784   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0829 19:35:34.771634   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0829 19:35:34.771691   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0829 19:35:34.771642   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0829 19:35:34.771742   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0829 19:35:34.771818   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0829 19:35:34.790517   79869 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0829 19:35:34.826666   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0829 19:35:34.832449   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0829 19:35:34.850172   79869 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0829 19:35:35.112084   79869 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:35.251873   79869 cache_images.go:92] duration metric: took 1.34907399s to LoadCachedImages
	W0829 19:35:35.251967   79869 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
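	For context, the image-cache phase logged above can be reproduced by hand: for each required image minikube asks the runtime for its ID, marks mismatches as "needs transfer", removes the stale tag, and reloads from the cache directory. A minimal sketch for a single image (image name and crictl path taken from the log; illustrative, not part of the test output):
	# probe CRI-O's image store (podman shares it) for the pinned tag
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0 \
	  || echo "not present in runtime, needs transfer"
	# drop the stale tag so the cached archive can be loaded cleanly
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0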
	I0829 19:35:35.251984   79869 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0829 19:35:35.252130   79869 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-467349 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:35:35.252215   79869 ssh_runner.go:195] Run: crio config
	I0829 19:35:35.307174   79869 cni.go:84] Creating CNI manager for ""
	I0829 19:35:35.307205   79869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:35:35.307229   79869 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:35:35.307253   79869 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467349 NodeName:old-k8s-version-467349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0829 19:35:35.307421   79869 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-467349"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:35:35.307498   79869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0829 19:35:35.317493   79869 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:35:35.317574   79869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:35:35.327102   79869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0829 19:35:35.343936   79869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:35:35.362420   79869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
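	With kubeadm.yaml.new uploaded, the rendered config can also be sanity-checked on the node without mutating the cluster; a hedged example using kubeadm's dry-run mode (the exact invocation is an assumption, mirroring the env/PATH pattern used later in this log):
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run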
	I0829 19:35:35.379862   79869 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0829 19:35:35.383595   79869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:35.396175   79869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:35.513069   79869 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:35:35.535454   79869 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349 for IP: 192.168.72.112
	I0829 19:35:35.535481   79869 certs.go:194] generating shared ca certs ...
	I0829 19:35:35.535500   79869 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:35.535693   79869 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:35:35.535751   79869 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:35:35.535764   79869 certs.go:256] generating profile certs ...
	I0829 19:35:35.535885   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/client.key
	I0829 19:35:35.535962   79869 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key.b97fdb0f
	I0829 19:35:35.536010   79869 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key
	I0829 19:35:35.536160   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:35:35.536198   79869 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:35:35.536212   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:35:35.536255   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:35:35.536289   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:35:35.536345   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:35:35.536403   79869 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:35.537270   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:35:35.573137   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:35:35.605232   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:35:35.633800   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:35:35.681773   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 19:35:35.711207   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:35:35.748040   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:35:35.774144   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/old-k8s-version-467349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:35:35.805029   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:35:35.833761   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:35:35.856820   79869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:35:35.883402   79869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:35:35.902258   79869 ssh_runner.go:195] Run: openssl version
	I0829 19:35:35.908223   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:35:35.919106   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923368   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.923414   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:35:35.930431   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:35:35.941856   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:35:35.953186   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957279   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.957351   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:35:35.963886   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:35:35.976058   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:35:35.986836   79869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991417   79869 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.991482   79869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:35:35.997160   79869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:35:36.009731   79869 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:35:36.015343   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:35:36.022897   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:35:36.028976   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:35:36.036658   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:35:36.042513   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:35:36.048085   79869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
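	The certificate handling above follows two standard openssl patterns: each CA is linked into /etc/ssl/certs under its subject hash so system trust picks it up, and each serving/client cert is checked with -checkend 86400 (i.e. still valid 24 hours from now). A condensed sketch using paths from the log:
	# link a CA into the trust store under its subject hash
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# exit status 0 only if the cert does not expire within the next 86400 seconds
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid tomorrow" || echo "expires within 24h"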
	I0829 19:35:36.053863   79869 kubeadm.go:392] StartCluster: {Name:old-k8s-version-467349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-467349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:35:36.053944   79869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:35:36.053999   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.099158   79869 cri.go:89] found id: ""
	I0829 19:35:36.099230   79869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:35:36.109678   79869 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:35:36.109701   79869 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:35:36.109751   79869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:35:36.119674   79869 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:35:36.120829   79869 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-467349" does not appear in /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:35:36.121495   79869 kubeconfig.go:62] /home/jenkins/minikube-integration/19531-13056/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-467349" cluster setting kubeconfig missing "old-k8s-version-467349" context setting]
	I0829 19:35:36.122505   79869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:35:36.221053   79869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:35:36.232505   79869 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.112
	I0829 19:35:36.232550   79869 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:35:36.232562   79869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:35:36.232612   79869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:35:36.272228   79869 cri.go:89] found id: ""
	I0829 19:35:36.272290   79869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:35:36.290945   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:35:36.301665   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:35:36.301688   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:35:36.301740   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:35:36.311828   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:35:36.311882   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:35:36.322539   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:35:36.331879   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:35:36.331947   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:35:36.343057   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.352806   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:35:36.352867   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:35:36.362158   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:35:36.372280   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:35:36.372355   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:35:36.383178   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:35:36.393699   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:36.514064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.332360   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.570906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.665203   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:35:37.764043   79869 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:35:37.764146   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:38.264990   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
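	The repeated pgrep runs in this stretch are minikube polling for the apiserver process after kubelet-start; a roughly equivalent loop (poll interval and lack of timeout are assumptions):
	# wait until a kube-apiserver launched for this minikube profile appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done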
	I0829 19:35:34.172082   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.669124   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.669696   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:36.936034   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:36.936510   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:36.936539   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:36.936463   80876 retry.go:31] will retry after 1.943807762s: waiting for machine to come up
	I0829 19:35:38.881644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:38.882110   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:38.882133   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:38.882067   80876 retry.go:31] will retry after 3.173912619s: waiting for machine to come up
	I0829 19:35:39.932725   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.429439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:38.764741   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.264314   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:39.765085   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.264910   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:40.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.264207   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.764841   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.265060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:42.764958   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:43.264971   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:41.168816   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.669594   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:42.059140   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:42.059668   78865 main.go:141] libmachine: (no-preload-690795) DBG | unable to find current IP address of domain no-preload-690795 in network mk-no-preload-690795
	I0829 19:35:42.059692   78865 main.go:141] libmachine: (no-preload-690795) DBG | I0829 19:35:42.059602   80876 retry.go:31] will retry after 4.193427915s: waiting for machine to come up
	I0829 19:35:44.430473   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.431149   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:43.764674   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.264893   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:44.764345   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.264234   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.764985   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.265107   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:46.764222   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.264350   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:47.764787   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:48.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:45.671012   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.168836   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:46.256270   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.256783   78865 main.go:141] libmachine: (no-preload-690795) Found IP for machine: 192.168.39.76
	I0829 19:35:46.256806   78865 main.go:141] libmachine: (no-preload-690795) Reserving static IP address...
	I0829 19:35:46.256822   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has current primary IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.257249   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.257274   78865 main.go:141] libmachine: (no-preload-690795) Reserved static IP address: 192.168.39.76
	I0829 19:35:46.257289   78865 main.go:141] libmachine: (no-preload-690795) DBG | skip adding static IP to network mk-no-preload-690795 - found existing host DHCP lease matching {name: "no-preload-690795", mac: "52:54:00:2b:48:ed", ip: "192.168.39.76"}
	I0829 19:35:46.257299   78865 main.go:141] libmachine: (no-preload-690795) Waiting for SSH to be available...
	I0829 19:35:46.257313   78865 main.go:141] libmachine: (no-preload-690795) DBG | Getting to WaitForSSH function...
	I0829 19:35:46.259334   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259664   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.259692   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.259788   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH client type: external
	I0829 19:35:46.259821   78865 main.go:141] libmachine: (no-preload-690795) DBG | Using SSH private key: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa (-rw-------)
	I0829 19:35:46.259859   78865 main.go:141] libmachine: (no-preload-690795) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0829 19:35:46.259871   78865 main.go:141] libmachine: (no-preload-690795) DBG | About to run SSH command:
	I0829 19:35:46.259902   78865 main.go:141] libmachine: (no-preload-690795) DBG | exit 0
	I0829 19:35:46.389869   78865 main.go:141] libmachine: (no-preload-690795) DBG | SSH cmd err, output: <nil>: 
	I0829 19:35:46.390295   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetConfigRaw
	I0829 19:35:46.390987   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.393890   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394310   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.394342   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.394673   78865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/config.json ...
	I0829 19:35:46.394846   78865 machine.go:93] provisionDockerMachine start ...
	I0829 19:35:46.394869   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:46.395082   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.397203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397508   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.397535   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.397676   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.397862   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398011   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.398178   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.398314   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.398475   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.398486   78865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 19:35:46.502132   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 19:35:46.502163   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502426   78865 buildroot.go:166] provisioning hostname "no-preload-690795"
	I0829 19:35:46.502449   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.502642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.505084   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505414   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.505443   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.505665   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.505861   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506035   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.506219   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.506379   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.506573   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.506597   78865 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-690795 && echo "no-preload-690795" | sudo tee /etc/hostname
	I0829 19:35:46.627246   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-690795
	
	I0829 19:35:46.627269   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.630081   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630430   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.630454   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.630611   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.630780   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.630947   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.631233   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.631397   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:46.631545   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:46.631568   78865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-690795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-690795/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-690795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 19:35:46.746055   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 19:35:46.746106   78865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19531-13056/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-13056/.minikube}
	I0829 19:35:46.746131   78865 buildroot.go:174] setting up certificates
	I0829 19:35:46.746143   78865 provision.go:84] configureAuth start
	I0829 19:35:46.746160   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetMachineName
	I0829 19:35:46.746411   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:46.749125   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749476   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.749497   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.749642   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.751828   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752178   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.752203   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.752317   78865 provision.go:143] copyHostCerts
	I0829 19:35:46.752384   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem, removing ...
	I0829 19:35:46.752404   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem
	I0829 19:35:46.752475   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/key.pem (1675 bytes)
	I0829 19:35:46.752580   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem, removing ...
	I0829 19:35:46.752591   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem
	I0829 19:35:46.752619   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/ca.pem (1082 bytes)
	I0829 19:35:46.752693   78865 exec_runner.go:144] found /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem, removing ...
	I0829 19:35:46.752703   78865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem
	I0829 19:35:46.752728   78865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-13056/.minikube/cert.pem (1123 bytes)
	I0829 19:35:46.752791   78865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem org=jenkins.no-preload-690795 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-690795]
	I0829 19:35:46.901689   78865 provision.go:177] copyRemoteCerts
	I0829 19:35:46.901744   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 19:35:46.901764   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:46.904873   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905241   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:46.905287   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:46.905458   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:46.905657   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:46.905805   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:46.905960   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:46.988181   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 19:35:47.011149   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 19:35:47.034849   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 19:35:47.057375   78865 provision.go:87] duration metric: took 311.217634ms to configureAuth
	I0829 19:35:47.057402   78865 buildroot.go:189] setting minikube options for container-runtime
	I0829 19:35:47.057599   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:35:47.057695   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.060274   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060594   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.060620   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.060750   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.060976   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061149   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.061311   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.061465   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.061676   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.061703   78865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0829 19:35:47.284836   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0829 19:35:47.284862   78865 machine.go:96] duration metric: took 890.004565ms to provisionDockerMachine
	I0829 19:35:47.284876   78865 start.go:293] postStartSetup for "no-preload-690795" (driver="kvm2")
	I0829 19:35:47.284889   78865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 19:35:47.284909   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.285207   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 19:35:47.285232   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.287875   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288162   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.288180   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.288391   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.288597   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.288772   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.288899   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.372833   78865 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 19:35:47.376649   78865 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 19:35:47.376670   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/addons for local assets ...
	I0829 19:35:47.376729   78865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-13056/.minikube/files for local assets ...
	I0829 19:35:47.376801   78865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem -> 202592.pem in /etc/ssl/certs
	I0829 19:35:47.376881   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 19:35:47.385721   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:35:47.407601   78865 start.go:296] duration metric: took 122.711153ms for postStartSetup
	I0829 19:35:47.407640   78865 fix.go:56] duration metric: took 19.620666095s for fixHost
	I0829 19:35:47.407673   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.410483   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.410873   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.410903   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.411139   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.411363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411527   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.411674   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.411830   78865 main.go:141] libmachine: Using SSH client type: native
	I0829 19:35:47.411987   78865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0829 19:35:47.412001   78865 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 19:35:47.518841   78865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724960147.499237123
	
	I0829 19:35:47.518864   78865 fix.go:216] guest clock: 1724960147.499237123
	I0829 19:35:47.518872   78865 fix.go:229] Guest: 2024-08-29 19:35:47.499237123 +0000 UTC Remote: 2024-08-29 19:35:47.407643858 +0000 UTC m=+351.882891548 (delta=91.593265ms)
	I0829 19:35:47.518891   78865 fix.go:200] guest clock delta is within tolerance: 91.593265ms
	I0829 19:35:47.518896   78865 start.go:83] releasing machines lock for "no-preload-690795", held for 19.731957743s
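	The fix.go lines above compare the host wall clock with the guest's, which is read by running date +%s.%N over SSH. A rough equivalent (IP and key path taken from the log; the 1s tolerance is an assumption):
	host_now=$(date +%s.%N)
	guest_now=$(ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa \
	  docker@192.168.39.76 'date +%s.%N')
	# flag drift beyond the assumed 1s tolerance
	awk -v h="$host_now" -v g="$guest_now" 'BEGIN { d = g - h; if (d < 0) d = -d; print "delta:", d, (d > 1 ? "(out of tolerance)" : "(ok)") }'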
	I0829 19:35:47.518914   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.519214   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:47.521738   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522125   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.522153   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.522310   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.522806   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523016   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:35:47.523082   78865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 19:35:47.523127   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.523209   78865 ssh_runner.go:195] Run: cat /version.json
	I0829 19:35:47.523225   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:35:47.526076   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526443   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.526462   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526489   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.526681   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.526826   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527005   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527036   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:47.527073   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:47.527199   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:35:47.527197   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.527370   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:35:47.527537   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:35:47.527690   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:35:47.635450   78865 ssh_runner.go:195] Run: systemctl --version
	I0829 19:35:47.641274   78865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0829 19:35:47.788805   78865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 19:35:47.794545   78865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 19:35:47.794601   78865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 19:35:47.810156   78865 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 19:35:47.810175   78865 start.go:495] detecting cgroup driver to use...
	I0829 19:35:47.810228   78865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 19:35:47.825795   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 19:35:47.839011   78865 docker.go:217] disabling cri-docker service (if available) ...
	I0829 19:35:47.839061   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0829 19:35:47.851854   78865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0829 19:35:47.864467   78865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0829 19:35:47.999155   78865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0829 19:35:48.143858   78865 docker.go:233] disabling docker service ...
	I0829 19:35:48.143921   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0829 19:35:48.157740   78865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0829 19:35:48.172067   78865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0829 19:35:48.339557   78865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0829 19:35:48.462950   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0829 19:35:48.475646   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 19:35:48.492262   78865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0829 19:35:48.492329   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.501580   78865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0829 19:35:48.501647   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.511241   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.520477   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.530413   78865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 19:35:48.540457   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.551258   78865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.567365   78865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0829 19:35:48.577266   78865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 19:35:48.586423   78865 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0829 19:35:48.586479   78865 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0829 19:35:48.599527   78865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 19:35:48.608666   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:35:48.721808   78865 ssh_runner.go:195] Run: sudo systemctl restart crio
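	(Editor's note, not part of the captured output: the ssh_runner commands above switch the guest from docker/containerd to CRI-O by rewriting /etc/crio/crio.conf.d/02-crio.conf and then restarting the service. A minimal sketch of the same sequence, assuming root on the guest VM; all paths and values are taken from the log lines above.)

	    cat <<'EOF' | sudo tee /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	    EOF
	    conf=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	    sudo modprobe br_netfilter          # the netfilter sysctl probe above failed, so the module is loaded explicitly
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio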
	I0829 19:35:48.811417   78865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0829 19:35:48.811495   78865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0829 19:35:48.816689   78865 start.go:563] Will wait 60s for crictl version
	I0829 19:35:48.816750   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:48.820563   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 19:35:48.862786   78865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0829 19:35:48.862869   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.889834   78865 ssh_runner.go:195] Run: crio --version
	I0829 19:35:48.918515   78865 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0829 19:35:48.919643   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetIP
	I0829 19:35:48.922182   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922530   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:35:48.922560   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:35:48.922725   78865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0829 19:35:48.926877   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:35:48.939254   78865 kubeadm.go:883] updating cluster {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 19:35:48.939379   78865 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 19:35:48.939413   78865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0829 19:35:48.972281   78865 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0829 19:35:48.972304   78865 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 19:35:48.972345   78865 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.972361   78865 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.972384   78865 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.972425   78865 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.972443   78865 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:48.972452   78865 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 19:35:48.972496   78865 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.972558   78865 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973929   78865 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.973979   78865 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 19:35:48.973933   78865 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:48.973931   78865 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:48.973932   78865 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:48.973938   78865 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:48.973939   78865 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.229315   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0829 19:35:49.232334   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.271261   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.328903   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.339435   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.349057   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.356840   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.387705   78865 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0829 19:35:49.387748   78865 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0829 19:35:49.387760   78865 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.387777   78865 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.387808   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.387829   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.389731   78865 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0829 19:35:49.389769   78865 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.389809   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.438231   78865 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0829 19:35:49.438264   78865 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.438304   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.453177   78865 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0829 19:35:49.453220   78865 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.453270   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.455713   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.455767   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.455802   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.455804   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.455772   78865 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0829 19:35:49.455895   78865 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.455921   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:49.458141   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.539090   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.539125   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.568605   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.573622   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.573575   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.678619   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.680581   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0829 19:35:49.680584   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0829 19:35:49.680671   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0829 19:35:49.699638   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0829 19:35:49.706556   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0829 19:35:49.803909   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 19:35:49.809759   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 19:35:49.809863   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.810356   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0829 19:35:49.810423   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:49.811234   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 19:35:49.811285   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:49.832040   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 19:35:49.832102   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 19:35:49.832153   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:49.832162   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:49.862517   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0829 19:35:49.862537   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862578   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0829 19:35:49.862653   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0829 19:35:49.862696   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0829 19:35:49.862703   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 19:35:49.862731   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0829 19:35:49.862760   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0829 19:35:49.862788   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:35:50.192890   78865 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:48.930928   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:50.931805   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.430716   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:48.764746   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.264755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:49.764703   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.264240   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.764284   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.265111   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:51.764316   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.264213   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:52.764295   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:53.264451   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:50.168967   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:52.169327   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:51.820978   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.958376621s)
	I0829 19:35:51.821014   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0829 19:35:51.821035   78865 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821077   78865 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.958265625s)
	I0829 19:35:51.821109   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0829 19:35:51.821108   78865 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.62819044s)
	I0829 19:35:51.821211   78865 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0829 19:35:51.821243   78865 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:51.821275   78865 ssh_runner.go:195] Run: which crictl
	I0829 19:35:51.821111   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0829 19:35:55.931182   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.431477   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:53.764946   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.265076   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.764273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.264844   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:55.764622   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:56.765120   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.265199   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:57.764610   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:58.264296   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:54.669752   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:56.670764   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:55.594240   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.773093303s)
	I0829 19:35:55.594275   78865 ssh_runner.go:235] Completed: which crictl: (3.77298113s)
	I0829 19:35:55.594290   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0829 19:35:55.594340   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:55.594348   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:55.594403   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0829 19:35:57.972145   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.377784997s)
	I0829 19:35:57.972180   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.377757134s)
	I0829 19:35:57.972210   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0829 19:35:57.972223   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:57.972237   78865 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:57.972270   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0829 19:35:58.025853   78865 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:35:59.843856   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.871560481s)
	I0829 19:35:59.843883   78865 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818003416s)
	I0829 19:35:59.843887   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0829 19:35:59.843915   78865 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 19:35:59.843925   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.844004   78865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:35:59.844019   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0829 19:35:59.849625   78865 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0829 19:36:00.432638   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.078312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:35:58.765060   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.265033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.765033   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.265144   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:00.764425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:01.764672   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.264962   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:02.764603   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:03.264407   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:35:59.170365   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.668465   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.670347   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:01.294196   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.450154791s)
	I0829 19:36:01.294230   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0829 19:36:01.294273   78865 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:01.294336   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0829 19:36:03.144937   78865 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.850574318s)
	I0829 19:36:03.144978   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0829 19:36:03.145018   78865 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.145081   78865 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0829 19:36:03.803763   78865 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19531-13056/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0829 19:36:03.803802   78865 cache_images.go:123] Successfully loaded all cached images
	I0829 19:36:03.803807   78865 cache_images.go:92] duration metric: took 14.831492974s to LoadCachedImages
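	(Editor's note, not part of the captured output: because no preload tarball exists for v1.31.0, the lines above load each image from the local cache; a given image is inspected with podman, any stale tag is removed with crictl, and the cached tarball is loaded. A minimal per-image sketch, with the etcd image chosen purely for illustration.)

	    img=registry.k8s.io/etcd:3.5.15-0                  # illustrative; the log iterates over all eight images
	    tar=/var/lib/minikube/images/etcd_3.5.15-0
	    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	      sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale tag first
	      sudo podman load -i "$tar"                            # then load the image from the transferred cache file
	    fi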
	I0829 19:36:03.803818   78865 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.31.0 crio true true} ...
	I0829 19:36:03.803927   78865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-690795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 19:36:03.803988   78865 ssh_runner.go:195] Run: crio config
	I0829 19:36:03.854859   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:03.854879   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:03.854894   78865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 19:36:03.854915   78865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-690795 NodeName:no-preload-690795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 19:36:03.855055   78865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-690795"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.76
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 19:36:03.855114   78865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 19:36:03.865163   78865 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 19:36:03.865236   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 19:36:03.874348   78865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0829 19:36:03.891540   78865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 19:36:03.908488   78865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 19:36:03.926440   78865 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0829 19:36:03.930270   78865 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 19:36:03.942353   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:36:04.066646   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:36:04.083872   78865 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795 for IP: 192.168.39.76
	I0829 19:36:04.083901   78865 certs.go:194] generating shared ca certs ...
	I0829 19:36:04.083921   78865 certs.go:226] acquiring lock for ca certs: {Name:mka8f241b30678b2c2cdb76b83a45ea5ea9026f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:36:04.084106   78865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key
	I0829 19:36:04.084172   78865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key
	I0829 19:36:04.084186   78865 certs.go:256] generating profile certs ...
	I0829 19:36:04.084307   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/client.key
	I0829 19:36:04.084432   78865 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key.8a2db174
	I0829 19:36:04.084492   78865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key
	I0829 19:36:04.084656   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem (1338 bytes)
	W0829 19:36:04.084705   78865 certs.go:480] ignoring /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259_empty.pem, impossibly tiny 0 bytes
	I0829 19:36:04.084718   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 19:36:04.084753   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/ca.pem (1082 bytes)
	I0829 19:36:04.084790   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/cert.pem (1123 bytes)
	I0829 19:36:04.084827   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/certs/key.pem (1675 bytes)
	I0829 19:36:04.084883   78865 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem (1708 bytes)
	I0829 19:36:04.085744   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 19:36:04.124689   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 19:36:04.158769   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 19:36:04.188748   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 19:36:04.217577   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 19:36:04.251166   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 19:36:04.282961   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 19:36:04.306431   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/no-preload-690795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 19:36:04.329260   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/ssl/certs/202592.pem --> /usr/share/ca-certificates/202592.pem (1708 bytes)
	I0829 19:36:04.365050   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 19:36:04.393054   78865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-13056/.minikube/certs/20259.pem --> /usr/share/ca-certificates/20259.pem (1338 bytes)
	I0829 19:36:04.417384   78865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 19:36:04.434555   78865 ssh_runner.go:195] Run: openssl version
	I0829 19:36:04.440074   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202592.pem && ln -fs /usr/share/ca-certificates/202592.pem /etc/ssl/certs/202592.pem"
	I0829 19:36:04.451378   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455603   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:22 /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.455655   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202592.pem
	I0829 19:36:04.461114   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202592.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 19:36:04.472522   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 19:36:04.483064   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487316   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.487383   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 19:36:04.492860   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 19:36:04.504284   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20259.pem && ln -fs /usr/share/ca-certificates/20259.pem /etc/ssl/certs/20259.pem"
	I0829 19:36:04.515522   78865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519853   78865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:22 /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.519908   78865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20259.pem
	I0829 19:36:04.525240   78865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20259.pem /etc/ssl/certs/51391683.0"
	I0829 19:36:04.536612   78865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 19:36:04.540905   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 19:36:04.546622   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 19:36:04.552303   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 19:36:04.558306   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 19:36:04.564129   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 19:36:04.569635   78865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
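	(Editor's note, not part of the captured output: the openssl runs above probe each control-plane certificate with -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours; only then would a certificate be regenerated. A minimal sketch of that probe over the same certificate names.)

	    for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client etcd/server etcd/peer etcd/healthcheck-client; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	        || echo "${crt}.crt expires within 24h and would be regenerated"
	    done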
	I0829 19:36:04.575196   78865 kubeadm.go:392] StartCluster: {Name:no-preload-690795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-690795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:36:04.575279   78865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0829 19:36:04.575360   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.619563   78865 cri.go:89] found id: ""
	I0829 19:36:04.619638   78865 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 19:36:04.629655   78865 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 19:36:04.629675   78865 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 19:36:04.629785   78865 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 19:36:04.638771   78865 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 19:36:04.639763   78865 kubeconfig.go:125] found "no-preload-690795" server: "https://192.168.39.76:8443"
	I0829 19:36:04.641783   78865 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 19:36:04.650605   78865 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0829 19:36:04.650634   78865 kubeadm.go:1160] stopping kube-system containers ...
	I0829 19:36:04.650644   78865 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0829 19:36:04.650693   78865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0829 19:36:04.685589   78865 cri.go:89] found id: ""
	I0829 19:36:04.685656   78865 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 19:36:04.702584   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:36:04.711693   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:36:04.711712   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:36:04.711753   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:36:04.720291   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:36:04.720349   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:36:04.729301   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:36:04.739449   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:36:04.739513   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:36:04.748786   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.757128   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:36:04.757175   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:36:04.767533   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:36:04.777322   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:36:04.777373   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:36:04.786269   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:36:04.795387   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:04.904530   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.430803   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:07.431525   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:03.764403   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.265178   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:04.764546   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.265205   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:05.764700   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.264837   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.764871   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.264506   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.765230   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:08.265050   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.169466   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.669719   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:05.750216   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:05.949551   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.043930   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:06.140396   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:36:06.140505   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:06.641069   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.141458   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:07.161360   78865 api_server.go:72] duration metric: took 1.020963124s to wait for apiserver process to appear ...
	I0829 19:36:07.161390   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:36:07.161426   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.327675   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 19:36:10.327707   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 19:36:10.327721   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.396704   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.396737   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:10.661699   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:10.666518   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:10.666544   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.162227   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.167736   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.167774   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:11.662428   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:11.668688   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 19:36:11.668727   78865 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0829 19:36:12.162372   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:36:12.168297   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:36:12.175933   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:36:12.175956   78865 api_server.go:131] duration metric: took 5.014557664s to wait for apiserver health ...
	I0829 19:36:12.175967   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:36:12.175975   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:36:12.177903   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:36:09.930962   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:11.932180   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:08.764431   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.264876   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:09.764481   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.265100   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.764720   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.264283   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:11.764890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.264425   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:12.764965   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:13.264557   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:10.669915   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.169150   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:12.179056   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:36:12.202639   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:36:12.221804   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:36:12.242859   78865 system_pods.go:59] 8 kube-system pods found
	I0829 19:36:12.242897   78865 system_pods.go:61] "coredns-6f6b679f8f-j8zzh" [01eaffa5-a976-441c-987c-bdf3b7f72cd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:36:12.242905   78865 system_pods.go:61] "etcd-no-preload-690795" [df54ae59-44ff-4f7b-b6c0-6145bdae3e44] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 19:36:12.242912   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [aee247f2-1381-4571-a671-2cf140c78196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0829 19:36:12.242919   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [69244a85-2778-46c8-a95c-d0f8a264c0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 19:36:12.242923   78865 system_pods.go:61] "kube-proxy-q4mbt" [985478f9-235d-4922-a7fd-a0cbdddf3f68] Running
	I0829 19:36:12.242934   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [e1e141ab-eb79-4c87-bccd-274f1e7495b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 19:36:12.242940   78865 system_pods.go:61] "metrics-server-6867b74b74-svnwn" [e096a3dc-1166-4ee3-9f3f-e044064a5a13] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:36:12.242945   78865 system_pods.go:61] "storage-provisioner" [6fc868fa-2221-45ad-903e-cd3d2297a3e6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 19:36:12.242952   78865 system_pods.go:74] duration metric: took 21.125083ms to wait for pod list to return data ...
	I0829 19:36:12.242962   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:36:12.253567   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:36:12.253598   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:36:12.253612   78865 node_conditions.go:105] duration metric: took 10.645029ms to run NodePressure ...
	I0829 19:36:12.253634   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 19:36:12.514683   78865 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520060   78865 kubeadm.go:739] kubelet initialised
	I0829 19:36:12.520082   78865 kubeadm.go:740] duration metric: took 5.371928ms waiting for restarted kubelet to initialise ...
	I0829 19:36:12.520088   78865 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:36:12.524795   78865 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:14.533484   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:14.430676   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:16.930723   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:13.765038   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.264547   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:14.764878   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.264485   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.765114   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.264694   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:16.764599   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.264540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:17.764523   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:18.264855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:15.668846   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.669308   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:17.031326   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.530568   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:19.430550   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.431080   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.431736   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:18.764781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.264280   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:19.764653   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.264908   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.764855   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.265180   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:21.764470   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.264751   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:22.765034   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:23.264498   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:20.168590   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.168898   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:21.531983   78865 pod_ready.go:103] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:22.032162   78865 pod_ready.go:93] pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:22.032187   78865 pod_ready.go:82] duration metric: took 9.507358099s for pod "coredns-6f6b679f8f-j8zzh" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:22.032200   78865 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038935   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.038956   78865 pod_ready.go:82] duration metric: took 1.006750868s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.038966   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043258   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.043278   78865 pod_ready.go:82] duration metric: took 4.305789ms for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.043298   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049140   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.049159   78865 pod_ready.go:82] duration metric: took 5.852855ms for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.049170   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055033   78865 pod_ready.go:93] pod "kube-proxy-q4mbt" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.055054   78865 pod_ready.go:82] duration metric: took 5.87681ms for pod "kube-proxy-q4mbt" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.055067   78865 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229706   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:36:23.229734   78865 pod_ready.go:82] duration metric: took 174.6598ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:23.229748   78865 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	I0829 19:36:25.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:25.930818   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.430312   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:23.764384   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.265090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.765183   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.264966   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:25.764429   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.264774   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:26.765090   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.264524   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:27.764810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:28.264541   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:24.169024   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:26.169599   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.668840   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:27.736899   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.235632   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:30.430611   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.930362   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:28.764771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.264563   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:29.764735   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.265228   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.764328   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.264312   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:31.764627   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.264891   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:32.765104   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:33.264462   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:30.669561   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.671106   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:32.236488   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.736240   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:34.931264   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.430665   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:33.764540   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.265004   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:34.764934   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.264439   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:35.764982   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.264780   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:36.765081   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.264865   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:37.764612   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:37.764705   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:37.803674   79869 cri.go:89] found id: ""
	I0829 19:36:37.803704   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.803715   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:37.803724   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:37.803783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:37.836465   79869 cri.go:89] found id: ""
	I0829 19:36:37.836494   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.836504   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:37.836512   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:37.836574   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:37.870224   79869 cri.go:89] found id: ""
	I0829 19:36:37.870248   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.870256   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:37.870262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:37.870326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:37.904152   79869 cri.go:89] found id: ""
	I0829 19:36:37.904179   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.904187   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:37.904194   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:37.904267   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:37.939182   79869 cri.go:89] found id: ""
	I0829 19:36:37.939211   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.939220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:37.939228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:37.939293   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:37.975761   79869 cri.go:89] found id: ""
	I0829 19:36:37.975790   79869 logs.go:276] 0 containers: []
	W0829 19:36:37.975800   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:37.975808   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:37.975910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:38.008407   79869 cri.go:89] found id: ""
	I0829 19:36:38.008430   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.008437   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:38.008444   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:38.008497   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:38.041327   79869 cri.go:89] found id: ""
	I0829 19:36:38.041360   79869 logs.go:276] 0 containers: []
	W0829 19:36:38.041370   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:38.041381   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:38.041395   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:38.091167   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:38.091214   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:38.105093   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:38.105126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:38.227564   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:38.227599   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:38.227616   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:38.298287   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:38.298327   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:35.172336   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:37.671072   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:36.736855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:38.736902   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:39.929907   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.930998   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:40.836221   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:40.849288   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:40.849357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:40.882705   79869 cri.go:89] found id: ""
	I0829 19:36:40.882732   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.882739   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:40.882745   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:40.882791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:40.917639   79869 cri.go:89] found id: ""
	I0829 19:36:40.917667   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.917679   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:40.917687   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:40.917738   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:40.953804   79869 cri.go:89] found id: ""
	I0829 19:36:40.953843   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.953854   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:40.953863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:40.953925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:40.987341   79869 cri.go:89] found id: ""
	I0829 19:36:40.987376   79869 logs.go:276] 0 containers: []
	W0829 19:36:40.987388   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:40.987396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:40.987462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:41.026247   79869 cri.go:89] found id: ""
	I0829 19:36:41.026277   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.026290   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:41.026303   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:41.026372   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:41.064160   79869 cri.go:89] found id: ""
	I0829 19:36:41.064185   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.064194   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:41.064201   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:41.064278   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:41.115081   79869 cri.go:89] found id: ""
	I0829 19:36:41.115113   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.115124   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:41.115131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:41.115206   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:41.165472   79869 cri.go:89] found id: ""
	I0829 19:36:41.165501   79869 logs.go:276] 0 containers: []
	W0829 19:36:41.165511   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:41.165521   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:41.165536   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:41.219322   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:41.219357   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:41.232410   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:41.232443   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:41.296216   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:41.296235   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:41.296246   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:41.375784   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:41.375824   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:40.169548   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:42.672996   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:41.236777   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.736150   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.931489   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:45.933439   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.431152   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:43.914181   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:43.926643   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:43.926716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:43.963266   79869 cri.go:89] found id: ""
	I0829 19:36:43.963289   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.963297   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:43.963303   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:43.963350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:43.998886   79869 cri.go:89] found id: ""
	I0829 19:36:43.998917   79869 logs.go:276] 0 containers: []
	W0829 19:36:43.998926   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:43.998930   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:43.998975   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:44.033142   79869 cri.go:89] found id: ""
	I0829 19:36:44.033174   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.033183   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:44.033189   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:44.033244   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:44.066986   79869 cri.go:89] found id: ""
	I0829 19:36:44.067019   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.067031   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:44.067038   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:44.067106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:44.100228   79869 cri.go:89] found id: ""
	I0829 19:36:44.100261   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.100272   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:44.100279   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:44.100340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:44.134511   79869 cri.go:89] found id: ""
	I0829 19:36:44.134536   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.134543   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:44.134549   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:44.134615   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:44.170586   79869 cri.go:89] found id: ""
	I0829 19:36:44.170619   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.170631   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:44.170639   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:44.170692   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:44.205349   79869 cri.go:89] found id: ""
	I0829 19:36:44.205377   79869 logs.go:276] 0 containers: []
	W0829 19:36:44.205388   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:44.205398   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:44.205413   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:44.218874   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:44.218903   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:44.294221   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:44.294241   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:44.294253   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:44.373258   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:44.373293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:44.414355   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:44.414384   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:46.964371   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:46.976756   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:46.976827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:47.009512   79869 cri.go:89] found id: ""
	I0829 19:36:47.009537   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.009547   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:47.009555   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:47.009608   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:47.042141   79869 cri.go:89] found id: ""
	I0829 19:36:47.042177   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.042190   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:47.042199   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:47.042265   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:47.074680   79869 cri.go:89] found id: ""
	I0829 19:36:47.074707   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.074718   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:47.074726   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:47.074783   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:47.107014   79869 cri.go:89] found id: ""
	I0829 19:36:47.107042   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.107051   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:47.107059   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:47.107107   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:47.139770   79869 cri.go:89] found id: ""
	I0829 19:36:47.139795   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.139804   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:47.139810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:47.139862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:47.174463   79869 cri.go:89] found id: ""
	I0829 19:36:47.174502   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.174521   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:47.174532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:47.174580   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:47.206935   79869 cri.go:89] found id: ""
	I0829 19:36:47.206958   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.206966   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:47.206972   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:47.207035   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:47.250798   79869 cri.go:89] found id: ""
	I0829 19:36:47.250822   79869 logs.go:276] 0 containers: []
	W0829 19:36:47.250829   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:47.250836   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:47.250847   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:47.320803   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:47.320824   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:47.320850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:47.394344   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:47.394379   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:47.439451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:47.439481   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:47.491070   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:47.491106   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:45.169686   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:47.169784   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:46.236187   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:48.736605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.431543   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.931361   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.006196   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:50.020169   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:50.020259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:50.059323   79869 cri.go:89] found id: ""
	I0829 19:36:50.059353   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.059373   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:50.059380   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:50.059442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:50.095389   79869 cri.go:89] found id: ""
	I0829 19:36:50.095419   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.095430   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:50.095437   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:50.095499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:50.128133   79869 cri.go:89] found id: ""
	I0829 19:36:50.128162   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.128173   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:50.128180   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:50.128238   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:50.160999   79869 cri.go:89] found id: ""
	I0829 19:36:50.161021   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.161030   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:50.161035   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:50.161081   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:50.195246   79869 cri.go:89] found id: ""
	I0829 19:36:50.195268   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.195276   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:50.195282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:50.195329   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:50.229232   79869 cri.go:89] found id: ""
	I0829 19:36:50.229263   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.229273   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:50.229280   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:50.229340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:50.265141   79869 cri.go:89] found id: ""
	I0829 19:36:50.265169   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.265180   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:50.265188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:50.265251   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:50.299896   79869 cri.go:89] found id: ""
	I0829 19:36:50.299928   79869 logs.go:276] 0 containers: []
	W0829 19:36:50.299940   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:50.299949   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:50.299963   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:50.313408   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:50.313431   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:50.382019   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:50.382037   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:50.382049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:50.462174   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:50.462211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:50.499944   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:50.499971   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.050299   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:53.064866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:53.064963   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:53.098468   79869 cri.go:89] found id: ""
	I0829 19:36:53.098492   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.098500   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:53.098506   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:53.098555   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:53.130323   79869 cri.go:89] found id: ""
	I0829 19:36:53.130354   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.130377   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:53.130385   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:53.130445   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:53.175911   79869 cri.go:89] found id: ""
	I0829 19:36:53.175941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.175951   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:53.175968   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:53.176033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:53.209834   79869 cri.go:89] found id: ""
	I0829 19:36:53.209865   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.209874   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:53.209881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:53.209959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:53.246277   79869 cri.go:89] found id: ""
	I0829 19:36:53.246322   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.246332   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:53.246340   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:53.246401   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:53.283911   79869 cri.go:89] found id: ""
	I0829 19:36:53.283941   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.283953   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:53.283962   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:53.284024   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:53.315217   79869 cri.go:89] found id: ""
	I0829 19:36:53.315247   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.315257   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:53.315265   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:53.315328   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:53.348341   79869 cri.go:89] found id: ""
	I0829 19:36:53.348392   79869 logs.go:276] 0 containers: []
	W0829 19:36:53.348405   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:53.348417   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:53.348436   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:53.399841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:53.399879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:53.414453   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:53.414491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:53.490003   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:53.490023   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:53.490042   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:53.565162   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:53.565198   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:49.669984   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:52.168756   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:50.736642   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:53.236282   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.430710   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:57.430791   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.106051   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:56.119263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:56.119345   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:56.160104   79869 cri.go:89] found id: ""
	I0829 19:36:56.160131   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.160138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:56.160144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:56.160192   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:56.196028   79869 cri.go:89] found id: ""
	I0829 19:36:56.196054   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.196062   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:56.196067   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:56.196113   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:56.229503   79869 cri.go:89] found id: ""
	I0829 19:36:56.229532   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.229539   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:56.229553   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:56.229602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:56.263904   79869 cri.go:89] found id: ""
	I0829 19:36:56.263934   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.263944   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:56.263951   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:56.264013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:56.295579   79869 cri.go:89] found id: ""
	I0829 19:36:56.295607   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.295618   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:56.295625   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:56.295680   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:56.328514   79869 cri.go:89] found id: ""
	I0829 19:36:56.328548   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.328556   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:56.328563   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:56.328620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:56.361388   79869 cri.go:89] found id: ""
	I0829 19:36:56.361418   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.361426   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:56.361431   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:56.361508   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:56.393312   79869 cri.go:89] found id: ""
	I0829 19:36:56.393345   79869 logs.go:276] 0 containers: []
	W0829 19:36:56.393354   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:56.393362   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:56.393372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:56.446431   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:56.446472   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:36:56.459086   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:56.459112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:56.525526   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:56.525554   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:56.525569   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:56.609554   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:56.609592   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:54.169625   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:56.169688   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.170249   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:55.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:58.235887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:00.236133   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.931992   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.430785   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:36:59.148291   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:36:59.162462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:36:59.162524   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:36:59.199732   79869 cri.go:89] found id: ""
	I0829 19:36:59.199761   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.199771   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:36:59.199780   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:36:59.199861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:36:59.232285   79869 cri.go:89] found id: ""
	I0829 19:36:59.232324   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.232335   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:36:59.232345   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:36:59.232415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:36:59.266424   79869 cri.go:89] found id: ""
	I0829 19:36:59.266452   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.266463   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:36:59.266471   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:36:59.266536   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:36:59.306707   79869 cri.go:89] found id: ""
	I0829 19:36:59.306733   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.306742   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:36:59.306748   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:36:59.306807   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:36:59.345114   79869 cri.go:89] found id: ""
	I0829 19:36:59.345144   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.345154   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:36:59.345162   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:36:59.345225   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:36:59.382940   79869 cri.go:89] found id: ""
	I0829 19:36:59.382963   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.382971   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:36:59.382977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:36:59.383031   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:36:59.420066   79869 cri.go:89] found id: ""
	I0829 19:36:59.420088   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.420095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:36:59.420101   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:36:59.420146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:36:59.457355   79869 cri.go:89] found id: ""
	I0829 19:36:59.457377   79869 logs.go:276] 0 containers: []
	W0829 19:36:59.457385   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:36:59.457392   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:36:59.457409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:36:59.528868   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:36:59.528893   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:36:59.528908   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:36:59.612849   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:36:59.612886   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:36:59.649036   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:36:59.649064   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:36:59.703071   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:36:59.703105   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.216020   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:02.229270   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:02.229351   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:02.266857   79869 cri.go:89] found id: ""
	I0829 19:37:02.266885   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.266897   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:02.266904   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:02.266967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:02.304473   79869 cri.go:89] found id: ""
	I0829 19:37:02.304501   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.304512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:02.304520   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:02.304590   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:02.338219   79869 cri.go:89] found id: ""
	I0829 19:37:02.338244   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.338253   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:02.338261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:02.338323   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:02.370974   79869 cri.go:89] found id: ""
	I0829 19:37:02.371006   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.371017   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:02.371025   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:02.371084   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:02.405871   79869 cri.go:89] found id: ""
	I0829 19:37:02.405895   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.405902   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:02.405908   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:02.405955   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:02.438516   79869 cri.go:89] found id: ""
	I0829 19:37:02.438543   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.438554   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:02.438568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:02.438630   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:02.471180   79869 cri.go:89] found id: ""
	I0829 19:37:02.471205   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.471213   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:02.471218   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:02.471276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:02.503203   79869 cri.go:89] found id: ""
	I0829 19:37:02.503227   79869 logs.go:276] 0 containers: []
	W0829 19:37:02.503237   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:02.503248   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:02.503262   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:02.555303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:02.555337   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:02.567903   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:02.567927   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:02.641377   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:02.641403   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:02.641418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:02.717475   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:02.717522   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:00.669482   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.669691   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:02.237155   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.237334   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:04.431033   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.431419   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.431901   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:05.257326   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:05.270641   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:05.270717   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:05.303873   79869 cri.go:89] found id: ""
	I0829 19:37:05.303901   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.303909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:05.303915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:05.303959   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:05.345153   79869 cri.go:89] found id: ""
	I0829 19:37:05.345176   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.345184   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:05.345189   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:05.345245   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:05.379032   79869 cri.go:89] found id: ""
	I0829 19:37:05.379059   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.379067   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:05.379073   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:05.379135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:05.412432   79869 cri.go:89] found id: ""
	I0829 19:37:05.412465   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.412476   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:05.412484   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:05.412538   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:05.445441   79869 cri.go:89] found id: ""
	I0829 19:37:05.445464   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.445471   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:05.445477   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:05.445527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:05.478921   79869 cri.go:89] found id: ""
	I0829 19:37:05.478949   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.478957   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:05.478964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:05.479011   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:05.509821   79869 cri.go:89] found id: ""
	I0829 19:37:05.509849   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.509859   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:05.509866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:05.509924   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:05.541409   79869 cri.go:89] found id: ""
	I0829 19:37:05.541435   79869 logs.go:276] 0 containers: []
	W0829 19:37:05.541443   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:05.541451   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:05.541464   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.590569   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:05.590601   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:05.604071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:05.604101   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:05.685233   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:05.685262   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:05.685277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:05.761082   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:05.761112   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.299816   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:08.312964   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:08.313037   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:08.344710   79869 cri.go:89] found id: ""
	I0829 19:37:08.344737   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.344745   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:08.344755   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:08.344820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:08.378185   79869 cri.go:89] found id: ""
	I0829 19:37:08.378210   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.378217   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:08.378223   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:08.378272   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:08.410619   79869 cri.go:89] found id: ""
	I0829 19:37:08.410645   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.410663   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:08.410670   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:08.410729   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:08.445494   79869 cri.go:89] found id: ""
	I0829 19:37:08.445522   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.445531   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:08.445540   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:08.445601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:08.478225   79869 cri.go:89] found id: ""
	I0829 19:37:08.478249   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.478258   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:08.478263   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:08.478311   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:08.512006   79869 cri.go:89] found id: ""
	I0829 19:37:08.512032   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.512042   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:08.512049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:08.512111   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:08.546800   79869 cri.go:89] found id: ""
	I0829 19:37:08.546831   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.546841   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:08.546848   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:08.546911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:08.580353   79869 cri.go:89] found id: ""
	I0829 19:37:08.580383   79869 logs.go:276] 0 containers: []
	W0829 19:37:08.580394   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:08.580405   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:08.580418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:08.661004   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:08.661041   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:08.708548   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:08.708581   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:05.168832   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:07.669695   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:06.736029   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.736415   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:10.930895   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.430209   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:08.761385   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:08.761418   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:08.774365   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:08.774392   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:08.839864   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.340781   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:11.353417   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:11.353492   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:11.388836   79869 cri.go:89] found id: ""
	I0829 19:37:11.388864   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.388873   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:11.388879   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:11.388925   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:11.429655   79869 cri.go:89] found id: ""
	I0829 19:37:11.429685   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.429695   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:11.429703   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:11.429761   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:11.462122   79869 cri.go:89] found id: ""
	I0829 19:37:11.462157   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.462166   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:11.462174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:11.462236   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:11.495955   79869 cri.go:89] found id: ""
	I0829 19:37:11.495985   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.495996   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:11.496003   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:11.496063   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:11.529394   79869 cri.go:89] found id: ""
	I0829 19:37:11.529427   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.529438   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:11.529446   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:11.529513   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:11.565804   79869 cri.go:89] found id: ""
	I0829 19:37:11.565830   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.565838   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:11.565844   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:11.565903   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:11.601786   79869 cri.go:89] found id: ""
	I0829 19:37:11.601815   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.601825   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:11.601832   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:11.601889   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:11.638213   79869 cri.go:89] found id: ""
	I0829 19:37:11.638234   79869 logs.go:276] 0 containers: []
	W0829 19:37:11.638242   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:11.638250   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:11.638260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:11.651085   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:11.651113   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:11.716834   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:11.716858   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:11.716872   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:11.804266   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:11.804310   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:11.846655   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:11.846684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:10.168947   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:12.669439   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:11.236100   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:13.236138   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.930954   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.931355   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:14.408512   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:14.420973   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:14.421033   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:14.456516   79869 cri.go:89] found id: ""
	I0829 19:37:14.456540   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.456548   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:14.456553   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:14.456604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:14.489480   79869 cri.go:89] found id: ""
	I0829 19:37:14.489502   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.489512   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:14.489517   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:14.489562   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:14.521821   79869 cri.go:89] found id: ""
	I0829 19:37:14.521849   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.521857   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:14.521863   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:14.521911   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:14.557084   79869 cri.go:89] found id: ""
	I0829 19:37:14.557116   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.557125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:14.557131   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:14.557180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:14.590979   79869 cri.go:89] found id: ""
	I0829 19:37:14.591009   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.591019   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:14.591027   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:14.591088   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:14.624022   79869 cri.go:89] found id: ""
	I0829 19:37:14.624047   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.624057   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:14.624066   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:14.624131   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:14.656100   79869 cri.go:89] found id: ""
	I0829 19:37:14.656133   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.656145   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:14.656153   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:14.656214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:14.694241   79869 cri.go:89] found id: ""
	I0829 19:37:14.694276   79869 logs.go:276] 0 containers: []
	W0829 19:37:14.694289   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:14.694302   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:14.694317   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.748276   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:14.748312   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:14.761340   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:14.761361   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:14.834815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:14.834842   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:14.834857   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:14.909857   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:14.909898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.453264   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:17.466704   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:17.466776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:17.500163   79869 cri.go:89] found id: ""
	I0829 19:37:17.500193   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.500205   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:17.500212   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:17.500269   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:17.532155   79869 cri.go:89] found id: ""
	I0829 19:37:17.532182   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.532192   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:17.532200   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:17.532259   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:17.564710   79869 cri.go:89] found id: ""
	I0829 19:37:17.564737   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.564747   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:17.564754   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:17.564816   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:17.597056   79869 cri.go:89] found id: ""
	I0829 19:37:17.597091   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.597103   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:17.597111   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:17.597173   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:17.633398   79869 cri.go:89] found id: ""
	I0829 19:37:17.633424   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.633434   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:17.633442   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:17.633506   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:17.666201   79869 cri.go:89] found id: ""
	I0829 19:37:17.666243   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.666254   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:17.666262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:17.666324   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:17.700235   79869 cri.go:89] found id: ""
	I0829 19:37:17.700259   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.700266   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:17.700273   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:17.700320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:17.732060   79869 cri.go:89] found id: ""
	I0829 19:37:17.732090   79869 logs.go:276] 0 containers: []
	W0829 19:37:17.732100   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:17.732110   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:17.732126   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:17.747071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:17.747107   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:17.816644   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:17.816665   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:17.816677   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:17.895084   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:17.895134   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:17.935093   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:17.935125   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:14.669895   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.170115   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:15.736101   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:17.736304   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:19.736492   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.429878   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.430233   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:20.484693   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:20.497977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:20.498043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:20.531361   79869 cri.go:89] found id: ""
	I0829 19:37:20.531389   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.531400   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:20.531408   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:20.531469   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:20.569556   79869 cri.go:89] found id: ""
	I0829 19:37:20.569583   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.569594   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:20.569603   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:20.569668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:20.602350   79869 cri.go:89] found id: ""
	I0829 19:37:20.602377   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.602385   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:20.602391   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:20.602448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:20.637274   79869 cri.go:89] found id: ""
	I0829 19:37:20.637305   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.637319   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:20.637327   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:20.637388   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:20.686169   79869 cri.go:89] found id: ""
	I0829 19:37:20.686196   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.686204   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:20.686210   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:20.686257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:20.722745   79869 cri.go:89] found id: ""
	I0829 19:37:20.722775   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.722786   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:20.722794   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:20.722856   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:20.757314   79869 cri.go:89] found id: ""
	I0829 19:37:20.757337   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.757344   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:20.757349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:20.757398   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:20.790802   79869 cri.go:89] found id: ""
	I0829 19:37:20.790834   79869 logs.go:276] 0 containers: []
	W0829 19:37:20.790844   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:20.790855   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:20.790870   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:20.840866   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:20.840898   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:20.854053   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:20.854098   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:20.921717   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:20.921746   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:20.921761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:21.003362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:21.003398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:23.541356   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:23.554621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:23.554699   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:23.588155   79869 cri.go:89] found id: ""
	I0829 19:37:23.588190   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.588199   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:23.588207   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:23.588273   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:23.622917   79869 cri.go:89] found id: ""
	I0829 19:37:23.622945   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.622954   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:23.622960   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:23.623016   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:23.658615   79869 cri.go:89] found id: ""
	I0829 19:37:23.658648   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.658657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:23.658663   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:23.658720   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:23.693196   79869 cri.go:89] found id: ""
	I0829 19:37:23.693224   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.693234   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:23.693242   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:23.693309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:23.728285   79869 cri.go:89] found id: ""
	I0829 19:37:23.728317   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.728328   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:23.728336   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:23.728399   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:19.668651   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:21.669949   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.670402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:22.235749   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.236078   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:24.431492   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.930440   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:23.763713   79869 cri.go:89] found id: ""
	I0829 19:37:23.763741   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.763751   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:23.763759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:23.763812   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:23.797776   79869 cri.go:89] found id: ""
	I0829 19:37:23.797801   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.797809   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:23.797814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:23.797863   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:23.832108   79869 cri.go:89] found id: ""
	I0829 19:37:23.832139   79869 logs.go:276] 0 containers: []
	W0829 19:37:23.832151   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:23.832161   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:23.832175   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:23.880460   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:23.880490   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:23.893251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:23.893280   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:23.962079   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:23.962127   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:23.962140   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:24.048048   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:24.048088   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:26.593169   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:26.606349   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:26.606426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:26.643119   79869 cri.go:89] found id: ""
	I0829 19:37:26.643143   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.643155   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:26.643161   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:26.643216   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:26.681555   79869 cri.go:89] found id: ""
	I0829 19:37:26.681579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.681591   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:26.681597   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:26.681655   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:26.718440   79869 cri.go:89] found id: ""
	I0829 19:37:26.718469   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.718479   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:26.718486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:26.718549   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:26.755249   79869 cri.go:89] found id: ""
	I0829 19:37:26.755274   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.755284   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:26.755292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:26.755356   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:26.790554   79869 cri.go:89] found id: ""
	I0829 19:37:26.790579   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.790590   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:26.790597   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:26.790665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:26.826492   79869 cri.go:89] found id: ""
	I0829 19:37:26.826521   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.826530   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:26.826537   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:26.826600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:26.863456   79869 cri.go:89] found id: ""
	I0829 19:37:26.863487   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.863499   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:26.863508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:26.863579   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:26.897637   79869 cri.go:89] found id: ""
	I0829 19:37:26.897670   79869 logs.go:276] 0 containers: []
	W0829 19:37:26.897683   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:26.897694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:26.897709   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:26.978362   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:26.978400   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:27.016212   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:27.016245   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:27.078350   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:27.078386   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:27.101701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:27.101744   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:27.186720   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:26.168605   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.170938   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:26.735518   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:28.737503   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.431222   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.931202   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:29.686902   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:29.699814   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:29.699885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:29.733867   79869 cri.go:89] found id: ""
	I0829 19:37:29.733893   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.733904   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:29.733911   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:29.733970   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:29.767910   79869 cri.go:89] found id: ""
	I0829 19:37:29.767937   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.767946   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:29.767952   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:29.767998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:29.801085   79869 cri.go:89] found id: ""
	I0829 19:37:29.801109   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.801117   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:29.801122   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:29.801166   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:29.834215   79869 cri.go:89] found id: ""
	I0829 19:37:29.834238   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.834246   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:29.834251   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:29.834307   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:29.872761   79869 cri.go:89] found id: ""
	I0829 19:37:29.872785   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.872793   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:29.872803   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:29.872847   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:29.909354   79869 cri.go:89] found id: ""
	I0829 19:37:29.909385   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.909395   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:29.909408   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:29.909468   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:29.941359   79869 cri.go:89] found id: ""
	I0829 19:37:29.941383   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.941390   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:29.941396   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:29.941451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:29.973694   79869 cri.go:89] found id: ""
	I0829 19:37:29.973726   79869 logs.go:276] 0 containers: []
	W0829 19:37:29.973736   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:29.973746   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:29.973761   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:30.024863   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:30.024896   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.039092   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:30.039119   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:30.106106   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:30.106128   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:30.106143   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:30.183254   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:30.183289   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:32.722665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:32.736188   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:32.736261   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:32.773039   79869 cri.go:89] found id: ""
	I0829 19:37:32.773065   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.773073   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:32.773082   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:32.773144   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:32.818204   79869 cri.go:89] found id: ""
	I0829 19:37:32.818234   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.818245   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:32.818252   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:32.818313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:32.862902   79869 cri.go:89] found id: ""
	I0829 19:37:32.862932   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.862942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:32.862949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:32.863009   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:32.908338   79869 cri.go:89] found id: ""
	I0829 19:37:32.908369   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.908380   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:32.908388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:32.908452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:32.941717   79869 cri.go:89] found id: ""
	I0829 19:37:32.941746   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.941757   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:32.941765   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:32.941827   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:32.975777   79869 cri.go:89] found id: ""
	I0829 19:37:32.975806   79869 logs.go:276] 0 containers: []
	W0829 19:37:32.975818   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:32.975827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:32.975885   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:33.007518   79869 cri.go:89] found id: ""
	I0829 19:37:33.007551   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.007563   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:33.007570   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:33.007638   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:33.039902   79869 cri.go:89] found id: ""
	I0829 19:37:33.039924   79869 logs.go:276] 0 containers: []
	W0829 19:37:33.039931   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:33.039946   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:33.039958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:33.111691   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:33.111720   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:33.111734   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:33.191036   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:33.191067   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:33.228850   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:33.228882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:33.282314   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:33.282351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:30.668490   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:32.669630   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:31.235788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.735661   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:33.931996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.932964   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.429817   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.796597   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:35.809357   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:35.809437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:35.841747   79869 cri.go:89] found id: ""
	I0829 19:37:35.841774   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.841783   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:35.841792   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:35.841850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:35.875614   79869 cri.go:89] found id: ""
	I0829 19:37:35.875639   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.875650   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:35.875657   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:35.875718   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:35.910547   79869 cri.go:89] found id: ""
	I0829 19:37:35.910571   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.910579   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:35.910585   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:35.910647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:35.949505   79869 cri.go:89] found id: ""
	I0829 19:37:35.949526   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.949533   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:35.949538   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:35.949583   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:35.984331   79869 cri.go:89] found id: ""
	I0829 19:37:35.984369   79869 logs.go:276] 0 containers: []
	W0829 19:37:35.984381   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:35.984388   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:35.984451   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:36.018870   79869 cri.go:89] found id: ""
	I0829 19:37:36.018897   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.018909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:36.018917   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:36.018976   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:36.053557   79869 cri.go:89] found id: ""
	I0829 19:37:36.053593   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.053603   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:36.053611   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:36.053668   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:36.087217   79869 cri.go:89] found id: ""
	I0829 19:37:36.087243   79869 logs.go:276] 0 containers: []
	W0829 19:37:36.087254   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:36.087264   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:36.087282   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:36.141546   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:36.141577   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:36.155496   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:36.155524   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:36.225014   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:36.225038   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:36.225052   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:36.304399   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:36.304442   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:35.168843   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.169415   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:35.736103   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:37.736554   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.235995   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:40.430698   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.430836   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:38.842368   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:38.856085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:38.856160   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:38.893989   79869 cri.go:89] found id: ""
	I0829 19:37:38.894016   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.894024   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:38.894030   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:38.894075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:38.926756   79869 cri.go:89] found id: ""
	I0829 19:37:38.926784   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.926792   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:38.926798   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:38.926859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:38.966346   79869 cri.go:89] found id: ""
	I0829 19:37:38.966370   79869 logs.go:276] 0 containers: []
	W0829 19:37:38.966379   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:38.966385   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:38.966442   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:39.000266   79869 cri.go:89] found id: ""
	I0829 19:37:39.000291   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.000298   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:39.000307   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:39.000355   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:39.037243   79869 cri.go:89] found id: ""
	I0829 19:37:39.037269   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.037277   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:39.037282   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:39.037347   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:39.068823   79869 cri.go:89] found id: ""
	I0829 19:37:39.068852   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.068864   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:39.068872   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:39.068936   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:39.099649   79869 cri.go:89] found id: ""
	I0829 19:37:39.099674   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.099682   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:39.099689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:39.099748   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:39.131764   79869 cri.go:89] found id: ""
	I0829 19:37:39.131786   79869 logs.go:276] 0 containers: []
	W0829 19:37:39.131794   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:39.131802   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:39.131814   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:39.188087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:39.188123   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:39.200989   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:39.201015   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:39.279230   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:39.279257   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:39.279271   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:39.358667   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:39.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:41.897833   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:41.911145   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:41.911219   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:41.947096   79869 cri.go:89] found id: ""
	I0829 19:37:41.947122   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.947133   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:41.947141   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:41.947203   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:41.984267   79869 cri.go:89] found id: ""
	I0829 19:37:41.984301   79869 logs.go:276] 0 containers: []
	W0829 19:37:41.984309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:41.984315   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:41.984384   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:42.018170   79869 cri.go:89] found id: ""
	I0829 19:37:42.018198   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.018209   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:42.018217   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:42.018281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:42.058245   79869 cri.go:89] found id: ""
	I0829 19:37:42.058269   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.058278   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:42.058283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:42.058327   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:42.093182   79869 cri.go:89] found id: ""
	I0829 19:37:42.093214   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.093226   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:42.093233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:42.093299   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:42.126013   79869 cri.go:89] found id: ""
	I0829 19:37:42.126041   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.126050   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:42.126058   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:42.126136   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:42.166568   79869 cri.go:89] found id: ""
	I0829 19:37:42.166660   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.166675   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:42.166683   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:42.166763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:42.204904   79869 cri.go:89] found id: ""
	I0829 19:37:42.204930   79869 logs.go:276] 0 containers: []
	W0829 19:37:42.204938   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:42.204947   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:42.204960   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:42.262487   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:42.262533   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:42.275703   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:42.275730   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:42.341375   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:42.341394   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:42.341408   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:42.420981   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:42.421021   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:39.670059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.169724   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:42.237785   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.736417   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.929743   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.930603   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:44.965267   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:44.979151   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:44.979204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:45.020423   79869 cri.go:89] found id: ""
	I0829 19:37:45.020448   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.020456   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:45.020461   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:45.020521   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:45.058200   79869 cri.go:89] found id: ""
	I0829 19:37:45.058225   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.058233   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:45.058238   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:45.058286   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:45.093886   79869 cri.go:89] found id: ""
	I0829 19:37:45.093909   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.093917   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:45.093923   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:45.093968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:45.127630   79869 cri.go:89] found id: ""
	I0829 19:37:45.127663   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.127674   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:45.127681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:45.127742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:45.160643   79869 cri.go:89] found id: ""
	I0829 19:37:45.160669   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.160679   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:45.160685   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:45.160742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:45.196010   79869 cri.go:89] found id: ""
	I0829 19:37:45.196035   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.196043   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:45.196050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:45.196101   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:45.229297   79869 cri.go:89] found id: ""
	I0829 19:37:45.229375   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.229395   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:45.229405   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:45.229461   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:45.267244   79869 cri.go:89] found id: ""
	I0829 19:37:45.267271   79869 logs.go:276] 0 containers: []
	W0829 19:37:45.267281   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:45.267292   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:45.267306   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:45.280179   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:45.280201   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:45.352318   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:45.352339   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:45.352351   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:45.432702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:45.432732   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:45.470540   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:45.470564   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.019771   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:48.032745   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:48.032819   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:48.066895   79869 cri.go:89] found id: ""
	I0829 19:37:48.066921   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.066930   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:48.066938   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:48.066998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:48.104824   79869 cri.go:89] found id: ""
	I0829 19:37:48.104853   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.104861   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:48.104866   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:48.104931   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:48.140964   79869 cri.go:89] found id: ""
	I0829 19:37:48.140990   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.140998   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:48.141004   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:48.141051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:48.174550   79869 cri.go:89] found id: ""
	I0829 19:37:48.174578   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.174587   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:48.174593   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:48.174647   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:48.207397   79869 cri.go:89] found id: ""
	I0829 19:37:48.207422   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.207430   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:48.207437   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:48.207495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:48.240948   79869 cri.go:89] found id: ""
	I0829 19:37:48.240970   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.240978   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:48.240983   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:48.241027   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:48.281058   79869 cri.go:89] found id: ""
	I0829 19:37:48.281087   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.281095   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:48.281100   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:48.281151   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:48.315511   79869 cri.go:89] found id: ""
	I0829 19:37:48.315541   79869 logs.go:276] 0 containers: []
	W0829 19:37:48.315552   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:48.315564   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:48.315580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:48.367680   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:48.367714   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:48.380251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:48.380285   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:48.449432   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:48.449452   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:48.449467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:48.525529   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:48.525563   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:44.669068   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:47.169440   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:46.737461   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.236079   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:49.431026   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.931134   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.064580   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:51.077351   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:51.077430   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:51.110018   79869 cri.go:89] found id: ""
	I0829 19:37:51.110049   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.110058   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:51.110063   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:51.110138   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:51.143667   79869 cri.go:89] found id: ""
	I0829 19:37:51.143700   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.143711   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:51.143719   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:51.143791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:51.178193   79869 cri.go:89] found id: ""
	I0829 19:37:51.178221   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.178229   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:51.178235   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:51.178285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:51.212323   79869 cri.go:89] found id: ""
	I0829 19:37:51.212352   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.212359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:51.212366   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:51.212413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:51.245724   79869 cri.go:89] found id: ""
	I0829 19:37:51.245745   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.245752   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:51.245758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:51.245832   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:51.278424   79869 cri.go:89] found id: ""
	I0829 19:37:51.278448   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.278456   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:51.278462   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:51.278509   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:51.309469   79869 cri.go:89] found id: ""
	I0829 19:37:51.309498   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.309508   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:51.309516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:51.309602   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:51.342596   79869 cri.go:89] found id: ""
	I0829 19:37:51.342625   79869 logs.go:276] 0 containers: []
	W0829 19:37:51.342639   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:51.342650   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:51.342664   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:51.394045   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:51.394083   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:51.407902   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:51.407934   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:51.480759   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:51.480782   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:51.480797   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:51.565533   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:51.565570   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:49.671574   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:52.168702   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:51.237371   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:53.736122   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.430278   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.431024   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:54.107142   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:54.121083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:54.121141   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:54.156019   79869 cri.go:89] found id: ""
	I0829 19:37:54.156042   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.156050   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:54.156056   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:54.156106   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:54.188748   79869 cri.go:89] found id: ""
	I0829 19:37:54.188772   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.188783   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:54.188790   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:54.188851   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:54.222044   79869 cri.go:89] found id: ""
	I0829 19:37:54.222079   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.222112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:54.222132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:54.222214   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:54.254710   79869 cri.go:89] found id: ""
	I0829 19:37:54.254740   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.254750   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:54.254759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:54.254820   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:54.292053   79869 cri.go:89] found id: ""
	I0829 19:37:54.292078   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.292086   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:54.292092   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:54.292153   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:54.330528   79869 cri.go:89] found id: ""
	I0829 19:37:54.330561   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.330573   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:54.330580   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:54.330653   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:54.363571   79869 cri.go:89] found id: ""
	I0829 19:37:54.363594   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.363602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:54.363608   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:54.363669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:54.395112   79869 cri.go:89] found id: ""
	I0829 19:37:54.395144   79869 logs.go:276] 0 containers: []
	W0829 19:37:54.395166   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:54.395178   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:54.395192   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:54.408701   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:54.408729   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:54.474198   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:54.474218   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:54.474231   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:54.555430   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:54.555467   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.592858   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:54.592893   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.144165   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:37:57.157368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:37:57.157437   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:37:57.194662   79869 cri.go:89] found id: ""
	I0829 19:37:57.194693   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.194706   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:37:57.194721   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:37:57.194784   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:37:57.226822   79869 cri.go:89] found id: ""
	I0829 19:37:57.226848   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.226856   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:37:57.226862   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:37:57.226910   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:37:57.263892   79869 cri.go:89] found id: ""
	I0829 19:37:57.263932   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.263945   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:37:57.263955   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:37:57.264018   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:37:57.301202   79869 cri.go:89] found id: ""
	I0829 19:37:57.301243   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.301255   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:37:57.301261   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:37:57.301317   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:37:57.335291   79869 cri.go:89] found id: ""
	I0829 19:37:57.335321   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.335337   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:37:57.335343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:37:57.335392   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:37:57.368961   79869 cri.go:89] found id: ""
	I0829 19:37:57.368983   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.368992   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:37:57.368997   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:37:57.369042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:37:57.401813   79869 cri.go:89] found id: ""
	I0829 19:37:57.401837   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.401844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:37:57.401850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:37:57.401906   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:37:57.434719   79869 cri.go:89] found id: ""
	I0829 19:37:57.434745   79869 logs.go:276] 0 containers: []
	W0829 19:37:57.434756   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:37:57.434765   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:37:57.434777   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:37:57.484182   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:37:57.484217   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:37:57.497025   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:37:57.497051   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:37:57.569752   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:37:57.569776   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:37:57.569789   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:57.651276   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:37:57.651324   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:37:54.169824   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.668831   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:56.236564   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.736176   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:37:58.930996   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.931806   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.430980   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.189981   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:00.204723   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:00.204794   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:00.241677   79869 cri.go:89] found id: ""
	I0829 19:38:00.241700   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.241707   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:00.241713   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:00.241768   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:00.278692   79869 cri.go:89] found id: ""
	I0829 19:38:00.278726   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.278736   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:00.278744   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:00.278801   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:00.310418   79869 cri.go:89] found id: ""
	I0829 19:38:00.310448   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.310459   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:00.310466   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:00.310528   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:00.348423   79869 cri.go:89] found id: ""
	I0829 19:38:00.348446   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.348453   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:00.348459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:00.348511   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:00.380954   79869 cri.go:89] found id: ""
	I0829 19:38:00.380978   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.380985   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:00.380991   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:00.381043   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:00.414783   79869 cri.go:89] found id: ""
	I0829 19:38:00.414812   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.414823   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:00.414831   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:00.414895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:00.450606   79869 cri.go:89] found id: ""
	I0829 19:38:00.450634   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.450642   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:00.450647   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:00.450696   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:00.485337   79869 cri.go:89] found id: ""
	I0829 19:38:00.485360   79869 logs.go:276] 0 containers: []
	W0829 19:38:00.485375   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:00.485382   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:00.485399   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:00.551481   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:00.551502   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:00.551513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:00.630781   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:00.630819   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:00.676339   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:00.676363   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:00.728420   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:00.728452   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.243268   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:03.256259   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:03.256359   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:03.291103   79869 cri.go:89] found id: ""
	I0829 19:38:03.291131   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.291138   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:03.291144   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:03.291190   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:03.327866   79869 cri.go:89] found id: ""
	I0829 19:38:03.327898   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.327909   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:03.327917   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:03.327986   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:03.359082   79869 cri.go:89] found id: ""
	I0829 19:38:03.359110   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.359121   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:03.359129   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:03.359183   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:03.392714   79869 cri.go:89] found id: ""
	I0829 19:38:03.392741   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.392751   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:03.392758   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:03.392823   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:03.427785   79869 cri.go:89] found id: ""
	I0829 19:38:03.427812   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.427820   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:03.427827   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:03.427888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:03.463136   79869 cri.go:89] found id: ""
	I0829 19:38:03.463161   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.463171   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:03.463177   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:03.463230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:03.496188   79869 cri.go:89] found id: ""
	I0829 19:38:03.496225   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.496237   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:03.496244   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:03.496295   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:03.529566   79869 cri.go:89] found id: ""
	I0829 19:38:03.529591   79869 logs.go:276] 0 containers: []
	W0829 19:38:03.529600   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:03.529609   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:03.529619   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:03.584787   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:03.584828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:03.599464   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:03.599509   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:03.676743   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:03.676763   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:03.676774   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:37:59.169059   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:01.668656   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.669716   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:00.736901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.236263   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.431293   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:07.930953   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:03.757552   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:03.757605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.297887   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:06.311413   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:06.311498   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:06.345494   79869 cri.go:89] found id: ""
	I0829 19:38:06.345529   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.345539   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:06.345546   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:06.345605   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:06.377646   79869 cri.go:89] found id: ""
	I0829 19:38:06.377680   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.377691   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:06.377698   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:06.377809   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:06.416770   79869 cri.go:89] found id: ""
	I0829 19:38:06.416799   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.416810   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:06.416817   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:06.416869   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:06.451995   79869 cri.go:89] found id: ""
	I0829 19:38:06.452024   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.452034   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:06.452040   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:06.452095   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:06.484604   79869 cri.go:89] found id: ""
	I0829 19:38:06.484631   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.484642   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:06.484650   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:06.484713   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:06.517955   79869 cri.go:89] found id: ""
	I0829 19:38:06.517981   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.517988   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:06.517994   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:06.518053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:06.551069   79869 cri.go:89] found id: ""
	I0829 19:38:06.551100   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.551111   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:06.551118   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:06.551178   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:06.585340   79869 cri.go:89] found id: ""
	I0829 19:38:06.585367   79869 logs.go:276] 0 containers: []
	W0829 19:38:06.585379   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:06.585389   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:06.585416   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:06.637942   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:06.637977   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:06.652097   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:06.652124   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:06.738226   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:06.738252   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:06.738268   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:06.817478   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:06.817519   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:06.168530   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.169657   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:05.736429   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:08.236731   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.931677   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.431484   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:09.360441   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:09.373372   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:09.373431   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:09.409942   79869 cri.go:89] found id: ""
	I0829 19:38:09.409970   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.409981   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:09.409989   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:09.410050   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:09.444611   79869 cri.go:89] found id: ""
	I0829 19:38:09.444639   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.444647   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:09.444652   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:09.444701   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:09.478206   79869 cri.go:89] found id: ""
	I0829 19:38:09.478233   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.478240   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:09.478246   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:09.478305   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:09.510313   79869 cri.go:89] found id: ""
	I0829 19:38:09.510340   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.510356   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:09.510361   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:09.510419   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:09.545380   79869 cri.go:89] found id: ""
	I0829 19:38:09.545412   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.545422   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:09.545429   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:09.545495   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:09.578560   79869 cri.go:89] found id: ""
	I0829 19:38:09.578591   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.578600   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:09.578606   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:09.578659   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:09.613445   79869 cri.go:89] found id: ""
	I0829 19:38:09.613476   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.613484   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:09.613490   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:09.613540   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:09.649933   79869 cri.go:89] found id: ""
	I0829 19:38:09.649961   79869 logs.go:276] 0 containers: []
	W0829 19:38:09.649970   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:09.649981   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:09.649998   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:09.662471   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:09.662496   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:09.728562   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:09.728594   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:09.728610   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:09.813152   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:09.813187   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:09.852846   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:09.852879   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.403437   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:12.429787   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:12.429872   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:12.470833   79869 cri.go:89] found id: ""
	I0829 19:38:12.470858   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.470866   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:12.470871   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:12.470947   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:12.502307   79869 cri.go:89] found id: ""
	I0829 19:38:12.502334   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.502343   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:12.502351   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:12.502411   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:12.535084   79869 cri.go:89] found id: ""
	I0829 19:38:12.535108   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.535114   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:12.535120   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:12.535182   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:12.571735   79869 cri.go:89] found id: ""
	I0829 19:38:12.571762   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.571772   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:12.571779   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:12.571838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:12.604987   79869 cri.go:89] found id: ""
	I0829 19:38:12.605020   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.605029   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:12.605036   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:12.605093   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:12.639257   79869 cri.go:89] found id: ""
	I0829 19:38:12.639281   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.639289   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:12.639300   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:12.639362   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:12.674790   79869 cri.go:89] found id: ""
	I0829 19:38:12.674811   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.674818   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:12.674824   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:12.674877   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:12.711132   79869 cri.go:89] found id: ""
	I0829 19:38:12.711156   79869 logs.go:276] 0 containers: []
	W0829 19:38:12.711164   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:12.711172   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:12.711184   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:12.763916   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:12.763950   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:12.777071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:12.777100   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:12.844974   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:12.845002   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:12.845017   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:12.924646   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:12.924682   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:10.668769   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.669771   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:10.736651   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:12.737433   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.236521   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:14.930832   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:16.931496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:15.465319   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:15.478237   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:15.478315   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:15.510066   79869 cri.go:89] found id: ""
	I0829 19:38:15.510113   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.510124   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:15.510132   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:15.510180   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:15.543094   79869 cri.go:89] found id: ""
	I0829 19:38:15.543117   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.543125   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:15.543138   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:15.543189   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:15.577253   79869 cri.go:89] found id: ""
	I0829 19:38:15.577279   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.577286   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:15.577292   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:15.577352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:15.612073   79869 cri.go:89] found id: ""
	I0829 19:38:15.612107   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.612119   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:15.612128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:15.612196   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:15.645565   79869 cri.go:89] found id: ""
	I0829 19:38:15.645587   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.645595   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:15.645601   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:15.645646   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:15.679991   79869 cri.go:89] found id: ""
	I0829 19:38:15.680018   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.680027   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:15.680033   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:15.680109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:15.713899   79869 cri.go:89] found id: ""
	I0829 19:38:15.713923   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.713931   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:15.713937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:15.713991   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:15.750559   79869 cri.go:89] found id: ""
	I0829 19:38:15.750590   79869 logs.go:276] 0 containers: []
	W0829 19:38:15.750601   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:15.750613   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:15.750628   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:15.762918   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:15.762943   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:15.832171   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:15.832195   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:15.832211   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:15.913268   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:15.913311   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:15.951909   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:15.951935   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:18.501587   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:18.514136   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:18.514198   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:18.546937   79869 cri.go:89] found id: ""
	I0829 19:38:18.546977   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.546986   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:18.546994   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:18.547059   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:18.579227   79869 cri.go:89] found id: ""
	I0829 19:38:18.579256   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.579267   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:18.579275   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:18.579350   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:18.610639   79869 cri.go:89] found id: ""
	I0829 19:38:18.610665   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.610673   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:18.610678   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:18.610739   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:18.642646   79869 cri.go:89] found id: ""
	I0829 19:38:18.642672   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.642680   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:18.642689   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:18.642744   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:18.678244   79869 cri.go:89] found id: ""
	I0829 19:38:18.678264   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.678271   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:18.678277   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:18.678341   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:18.709787   79869 cri.go:89] found id: ""
	I0829 19:38:18.709812   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.709820   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:18.709826   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:18.709876   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:14.669989   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.169402   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:17.736005   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:20.236887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:19.430240   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.930946   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:18.743570   79869 cri.go:89] found id: ""
	I0829 19:38:18.743593   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.743602   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:18.743610   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:18.743671   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:18.776790   79869 cri.go:89] found id: ""
	I0829 19:38:18.776815   79869 logs.go:276] 0 containers: []
	W0829 19:38:18.776823   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:18.776831   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:18.776842   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:18.791736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:18.791765   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:18.880815   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:18.880835   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:18.880849   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:18.969263   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:18.969304   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:19.005813   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:19.005843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.559810   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:21.572617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:21.572682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:21.606221   79869 cri.go:89] found id: ""
	I0829 19:38:21.606245   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.606253   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:21.606259   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:21.606310   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:21.637794   79869 cri.go:89] found id: ""
	I0829 19:38:21.637822   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.637830   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:21.637835   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:21.637888   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:21.671484   79869 cri.go:89] found id: ""
	I0829 19:38:21.671505   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.671515   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:21.671521   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:21.671576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:21.707212   79869 cri.go:89] found id: ""
	I0829 19:38:21.707240   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.707250   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:21.707257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:21.707320   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:21.742944   79869 cri.go:89] found id: ""
	I0829 19:38:21.742964   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.742971   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:21.742977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:21.743023   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:21.779919   79869 cri.go:89] found id: ""
	I0829 19:38:21.779940   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.779947   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:21.779952   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:21.780007   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:21.819817   79869 cri.go:89] found id: ""
	I0829 19:38:21.819848   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.819858   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:21.819866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:21.819926   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:21.853791   79869 cri.go:89] found id: ""
	I0829 19:38:21.853817   79869 logs.go:276] 0 containers: []
	W0829 19:38:21.853825   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:21.853833   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:21.853843   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:21.890519   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:21.890550   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:21.943940   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:21.943972   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:21.956697   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:21.956724   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:22.030470   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:22.030495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:22.030513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:19.170077   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:21.670142   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.672076   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:22.237387   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.737069   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:23.934621   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:26.430632   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:24.608719   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:24.624343   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:24.624403   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:24.679480   79869 cri.go:89] found id: ""
	I0829 19:38:24.679507   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.679514   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:24.679520   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:24.679589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:24.714065   79869 cri.go:89] found id: ""
	I0829 19:38:24.714114   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.714127   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:24.714134   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:24.714194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:24.751382   79869 cri.go:89] found id: ""
	I0829 19:38:24.751408   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.751417   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:24.751422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:24.751481   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:24.783549   79869 cri.go:89] found id: ""
	I0829 19:38:24.783573   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.783580   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:24.783588   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:24.783643   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:24.815500   79869 cri.go:89] found id: ""
	I0829 19:38:24.815524   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.815532   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:24.815539   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:24.815594   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:24.848225   79869 cri.go:89] found id: ""
	I0829 19:38:24.848249   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.848258   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:24.848264   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:24.848321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:24.880473   79869 cri.go:89] found id: ""
	I0829 19:38:24.880500   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.880511   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:24.880520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:24.880587   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:24.912907   79869 cri.go:89] found id: ""
	I0829 19:38:24.912941   79869 logs.go:276] 0 containers: []
	W0829 19:38:24.912959   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:24.912967   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:24.912996   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:24.985389   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:24.985420   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:24.985437   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:25.060555   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:25.060591   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:25.099073   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:25.099099   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:25.149434   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:25.149473   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:27.664027   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:27.677971   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:27.678042   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:27.715124   79869 cri.go:89] found id: ""
	I0829 19:38:27.715166   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.715179   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:27.715188   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:27.715255   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:27.748316   79869 cri.go:89] found id: ""
	I0829 19:38:27.748348   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.748361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:27.748370   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:27.748439   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:27.782075   79869 cri.go:89] found id: ""
	I0829 19:38:27.782116   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.782128   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:27.782137   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:27.782194   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:27.821517   79869 cri.go:89] found id: ""
	I0829 19:38:27.821545   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.821554   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:27.821562   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:27.821621   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:27.853619   79869 cri.go:89] found id: ""
	I0829 19:38:27.853643   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.853654   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:27.853668   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:27.853723   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:27.886790   79869 cri.go:89] found id: ""
	I0829 19:38:27.886814   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.886822   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:27.886828   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:27.886883   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:27.920756   79869 cri.go:89] found id: ""
	I0829 19:38:27.920779   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.920789   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:27.920802   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:27.920861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:27.959241   79869 cri.go:89] found id: ""
	I0829 19:38:27.959267   79869 logs.go:276] 0 containers: []
	W0829 19:38:27.959279   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:27.959289   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:27.959302   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:27.999922   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:27.999945   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:28.050616   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:28.050655   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:28.066437   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:28.066470   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:28.137427   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:28.137451   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:28.137466   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:26.168927   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.169453   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:27.235855   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:29.236537   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:28.929913   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.930403   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.931280   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:30.721890   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:30.736387   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:30.736462   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:30.773230   79869 cri.go:89] found id: ""
	I0829 19:38:30.773290   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.773304   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:30.773315   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:30.773382   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:30.806234   79869 cri.go:89] found id: ""
	I0829 19:38:30.806261   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.806271   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:30.806279   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:30.806344   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:30.841608   79869 cri.go:89] found id: ""
	I0829 19:38:30.841650   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.841674   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:30.841684   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:30.841751   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:30.875926   79869 cri.go:89] found id: ""
	I0829 19:38:30.875952   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.875960   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:30.875966   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:30.876020   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:30.914312   79869 cri.go:89] found id: ""
	I0829 19:38:30.914334   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.914341   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:30.914347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:30.914406   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:30.948819   79869 cri.go:89] found id: ""
	I0829 19:38:30.948854   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.948865   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:30.948876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:30.948937   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:30.980573   79869 cri.go:89] found id: ""
	I0829 19:38:30.980606   79869 logs.go:276] 0 containers: []
	W0829 19:38:30.980617   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:30.980627   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:30.980688   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:31.012024   79869 cri.go:89] found id: ""
	I0829 19:38:31.012052   79869 logs.go:276] 0 containers: []
	W0829 19:38:31.012061   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:31.012071   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:31.012089   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:31.076870   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:31.076896   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:31.076907   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:31.156257   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:31.156293   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:31.192883   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:31.192911   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:31.246303   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:31.246342   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:30.169738   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:32.669256   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:31.736303   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:34.235284   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:35.430450   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.931562   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:33.760372   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:33.773924   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:33.773998   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:33.810019   79869 cri.go:89] found id: ""
	I0829 19:38:33.810047   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.810057   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:33.810064   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:33.810146   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:33.848706   79869 cri.go:89] found id: ""
	I0829 19:38:33.848735   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.848747   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:33.848754   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:33.848822   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:33.880689   79869 cri.go:89] found id: ""
	I0829 19:38:33.880718   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.880731   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:33.880739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:33.880803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:33.911962   79869 cri.go:89] found id: ""
	I0829 19:38:33.911990   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.912000   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:33.912008   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:33.912071   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:33.948432   79869 cri.go:89] found id: ""
	I0829 19:38:33.948457   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.948468   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:33.948474   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:33.948534   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:33.981818   79869 cri.go:89] found id: ""
	I0829 19:38:33.981851   79869 logs.go:276] 0 containers: []
	W0829 19:38:33.981859   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:33.981866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:33.981923   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:34.022072   79869 cri.go:89] found id: ""
	I0829 19:38:34.022108   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.022118   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:34.022125   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:34.022185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:34.055881   79869 cri.go:89] found id: ""
	I0829 19:38:34.055909   79869 logs.go:276] 0 containers: []
	W0829 19:38:34.055920   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:34.055930   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:34.055944   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:34.133046   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:34.133079   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:34.175426   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:34.175457   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:34.228789   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:34.228825   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:34.243272   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:34.243322   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:34.318761   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:36.819665   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:36.832516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:36.832604   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:36.866781   79869 cri.go:89] found id: ""
	I0829 19:38:36.866815   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.866826   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:36.866833   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:36.866895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:36.903289   79869 cri.go:89] found id: ""
	I0829 19:38:36.903319   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.903329   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:36.903335   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:36.903383   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:36.936691   79869 cri.go:89] found id: ""
	I0829 19:38:36.936714   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.936722   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:36.936727   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:36.936776   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:36.969496   79869 cri.go:89] found id: ""
	I0829 19:38:36.969525   79869 logs.go:276] 0 containers: []
	W0829 19:38:36.969535   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:36.969541   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:36.969589   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:37.001683   79869 cri.go:89] found id: ""
	I0829 19:38:37.001707   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.001715   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:37.001720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:37.001765   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:37.041189   79869 cri.go:89] found id: ""
	I0829 19:38:37.041212   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.041223   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:37.041231   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:37.041281   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:37.077041   79869 cri.go:89] found id: ""
	I0829 19:38:37.077067   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.077075   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:37.077080   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:37.077135   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:37.110478   79869 cri.go:89] found id: ""
	I0829 19:38:37.110506   79869 logs.go:276] 0 containers: []
	W0829 19:38:37.110514   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:37.110523   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:37.110535   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:37.162560   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:37.162598   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:37.176466   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:37.176491   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:37.244843   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:37.244861   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:37.244874   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:37.323324   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:37.323362   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:35.169023   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:37.668411   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:36.236332   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:38.236971   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:40.237468   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.932147   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.430752   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:39.864755   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:39.877730   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:39.877789   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:39.909828   79869 cri.go:89] found id: ""
	I0829 19:38:39.909864   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.909874   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:39.909880   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:39.909941   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:39.943492   79869 cri.go:89] found id: ""
	I0829 19:38:39.943513   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.943521   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:39.943528   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:39.943586   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:39.976346   79869 cri.go:89] found id: ""
	I0829 19:38:39.976382   79869 logs.go:276] 0 containers: []
	W0829 19:38:39.976393   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:39.976401   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:39.976455   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:40.008764   79869 cri.go:89] found id: ""
	I0829 19:38:40.008793   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.008803   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:40.008810   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:40.008871   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:40.040324   79869 cri.go:89] found id: ""
	I0829 19:38:40.040356   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.040373   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:40.040381   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:40.040448   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:40.072836   79869 cri.go:89] found id: ""
	I0829 19:38:40.072867   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.072880   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:40.072888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:40.072938   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:40.105437   79869 cri.go:89] found id: ""
	I0829 19:38:40.105462   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.105470   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:40.105476   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:40.105520   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:40.139447   79869 cri.go:89] found id: ""
	I0829 19:38:40.139480   79869 logs.go:276] 0 containers: []
	W0829 19:38:40.139491   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:40.139502   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:40.139517   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.177799   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:40.177828   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:40.227087   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:40.227118   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:40.241116   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:40.241139   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:40.305556   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:40.305576   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:40.305590   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:42.886493   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:42.900941   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:42.901013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:42.938904   79869 cri.go:89] found id: ""
	I0829 19:38:42.938925   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.938933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:42.938946   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:42.939012   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:42.975186   79869 cri.go:89] found id: ""
	I0829 19:38:42.975213   79869 logs.go:276] 0 containers: []
	W0829 19:38:42.975221   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:42.975227   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:42.975288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:43.009115   79869 cri.go:89] found id: ""
	I0829 19:38:43.009144   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.009152   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:43.009157   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:43.009207   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:43.044948   79869 cri.go:89] found id: ""
	I0829 19:38:43.044977   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.044987   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:43.044995   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:43.045057   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:43.079699   79869 cri.go:89] found id: ""
	I0829 19:38:43.079725   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.079732   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:43.079739   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:43.079804   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:43.113742   79869 cri.go:89] found id: ""
	I0829 19:38:43.113770   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.113780   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:43.113788   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:43.113850   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:43.151852   79869 cri.go:89] found id: ""
	I0829 19:38:43.151876   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.151884   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:43.151889   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:43.151939   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:43.190832   79869 cri.go:89] found id: ""
	I0829 19:38:43.190854   79869 logs.go:276] 0 containers: []
	W0829 19:38:43.190862   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:43.190869   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:43.190882   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:43.242651   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:43.242683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:43.256378   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:43.256403   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:43.333657   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:43.333684   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:43.333696   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:43.409811   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:43.409850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:40.170246   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.669492   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:42.737831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.236831   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:44.930652   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:46.930941   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:45.947709   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:45.960937   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:45.961013   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:45.993198   79869 cri.go:89] found id: ""
	I0829 19:38:45.993230   79869 logs.go:276] 0 containers: []
	W0829 19:38:45.993242   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:45.993249   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:45.993303   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:46.031110   79869 cri.go:89] found id: ""
	I0829 19:38:46.031137   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.031148   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:46.031157   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:46.031212   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:46.065062   79869 cri.go:89] found id: ""
	I0829 19:38:46.065085   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.065093   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:46.065099   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:46.065155   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:46.099092   79869 cri.go:89] found id: ""
	I0829 19:38:46.099114   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.099122   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:46.099128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:46.099177   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:46.132426   79869 cri.go:89] found id: ""
	I0829 19:38:46.132450   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.132459   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:46.132464   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:46.132517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:46.165289   79869 cri.go:89] found id: ""
	I0829 19:38:46.165320   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.165337   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:46.165346   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:46.165415   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:46.198761   79869 cri.go:89] found id: ""
	I0829 19:38:46.198786   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.198793   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:46.198799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:46.198859   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:46.230621   79869 cri.go:89] found id: ""
	I0829 19:38:46.230649   79869 logs.go:276] 0 containers: []
	W0829 19:38:46.230659   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:46.230669   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:46.230683   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:46.280364   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:46.280398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:46.292854   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:46.292878   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:46.358673   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:46.358694   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:46.358705   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:46.439653   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:46.439688   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:44.669939   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.168670   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:47.735386   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.736163   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:49.431741   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.931271   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:48.975568   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:48.988793   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:48.988857   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:49.023697   79869 cri.go:89] found id: ""
	I0829 19:38:49.023721   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.023730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:49.023736   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:49.023791   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:49.060131   79869 cri.go:89] found id: ""
	I0829 19:38:49.060153   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.060160   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:49.060166   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:49.060222   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:49.096069   79869 cri.go:89] found id: ""
	I0829 19:38:49.096101   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.096112   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:49.096119   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:49.096185   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:49.130316   79869 cri.go:89] found id: ""
	I0829 19:38:49.130347   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.130359   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:49.130367   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:49.130434   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:49.162853   79869 cri.go:89] found id: ""
	I0829 19:38:49.162877   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.162890   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:49.162896   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:49.162956   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:49.198555   79869 cri.go:89] found id: ""
	I0829 19:38:49.198581   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.198592   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:49.198598   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:49.198663   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:49.232521   79869 cri.go:89] found id: ""
	I0829 19:38:49.232550   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.232560   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:49.232568   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:49.232626   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:49.268094   79869 cri.go:89] found id: ""
	I0829 19:38:49.268124   79869 logs.go:276] 0 containers: []
	W0829 19:38:49.268134   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:49.268145   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:49.268161   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:49.320884   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:49.320918   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:49.334244   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:49.334273   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:49.404442   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.404464   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:49.404479   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:49.482413   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:49.482451   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.021406   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:52.035517   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:52.035600   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:52.068868   79869 cri.go:89] found id: ""
	I0829 19:38:52.068902   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.068909   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:52.068915   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:52.068971   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:52.100503   79869 cri.go:89] found id: ""
	I0829 19:38:52.100533   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.100542   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:52.100548   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:52.100620   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:52.135148   79869 cri.go:89] found id: ""
	I0829 19:38:52.135189   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.135201   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:52.135208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:52.135276   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:52.174469   79869 cri.go:89] found id: ""
	I0829 19:38:52.174498   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.174508   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:52.174516   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:52.174576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:52.206485   79869 cri.go:89] found id: ""
	I0829 19:38:52.206508   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.206515   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:52.206520   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:52.206568   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:52.240053   79869 cri.go:89] found id: ""
	I0829 19:38:52.240073   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.240080   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:52.240085   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:52.240143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:52.274473   79869 cri.go:89] found id: ""
	I0829 19:38:52.274497   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.274506   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:52.274513   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:52.274576   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:52.306646   79869 cri.go:89] found id: ""
	I0829 19:38:52.306669   79869 logs.go:276] 0 containers: []
	W0829 19:38:52.306678   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:52.306686   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:52.306698   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:52.383558   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:52.383615   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:52.421958   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:52.421988   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:52.478024   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:52.478059   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:52.490736   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:52.490772   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:52.555670   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:49.169856   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:51.669655   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:52.236654   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:54.735292   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:53.931350   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.430287   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.432418   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:55.056273   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:55.068074   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:55.068147   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:55.102268   79869 cri.go:89] found id: ""
	I0829 19:38:55.102298   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.102309   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:55.102317   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:55.102368   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:55.133730   79869 cri.go:89] found id: ""
	I0829 19:38:55.133763   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.133773   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:55.133784   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:55.133848   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:55.168902   79869 cri.go:89] found id: ""
	I0829 19:38:55.168932   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.168942   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:55.168949   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:55.169015   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:55.206190   79869 cri.go:89] found id: ""
	I0829 19:38:55.206220   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.206231   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:55.206241   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:55.206326   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:55.240178   79869 cri.go:89] found id: ""
	I0829 19:38:55.240213   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.240224   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:55.240233   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:55.240313   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:55.272532   79869 cri.go:89] found id: ""
	I0829 19:38:55.272559   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.272569   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:55.272575   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:55.272636   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:55.305427   79869 cri.go:89] found id: ""
	I0829 19:38:55.305457   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.305467   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:55.305473   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:55.305522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:55.337444   79869 cri.go:89] found id: ""
	I0829 19:38:55.337477   79869 logs.go:276] 0 containers: []
	W0829 19:38:55.337489   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:55.337502   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:55.337518   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:55.402988   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:55.403019   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:55.403034   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:55.479168   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:55.479202   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:55.516345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:55.516372   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:55.566716   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:55.566749   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.080261   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:38:58.093884   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:38:58.093944   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:38:58.126772   79869 cri.go:89] found id: ""
	I0829 19:38:58.126799   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.126808   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:38:58.126814   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:38:58.126861   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:38:58.158344   79869 cri.go:89] found id: ""
	I0829 19:38:58.158373   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.158385   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:38:58.158393   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:38:58.158458   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:38:58.191524   79869 cri.go:89] found id: ""
	I0829 19:38:58.191550   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.191561   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:38:58.191569   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:38:58.191635   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:38:58.223336   79869 cri.go:89] found id: ""
	I0829 19:38:58.223362   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.223370   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:38:58.223375   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:38:58.223433   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:38:58.256223   79869 cri.go:89] found id: ""
	I0829 19:38:58.256248   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.256256   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:38:58.256262   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:38:58.256321   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:38:58.290008   79869 cri.go:89] found id: ""
	I0829 19:38:58.290035   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.290044   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:38:58.290049   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:38:58.290112   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:38:58.324441   79869 cri.go:89] found id: ""
	I0829 19:38:58.324471   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.324488   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:38:58.324495   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:38:58.324554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:38:58.357324   79869 cri.go:89] found id: ""
	I0829 19:38:58.357351   79869 logs.go:276] 0 containers: []
	W0829 19:38:58.357361   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:38:58.357378   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:38:58.357394   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:38:58.370251   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:38:58.370277   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:38:58.461098   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:38:58.461123   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:38:58.461138   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:38:58.537222   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:38:58.537255   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:38:58.574012   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:38:58.574043   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:38:54.170237   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.668188   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:58.668309   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:56.736467   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:38:59.236483   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:00.930424   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.931161   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.125646   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:01.138389   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:01.138464   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:01.172278   79869 cri.go:89] found id: ""
	I0829 19:39:01.172305   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.172313   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:01.172319   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:01.172375   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:01.207408   79869 cri.go:89] found id: ""
	I0829 19:39:01.207444   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.207455   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:01.207462   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:01.207522   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:01.242683   79869 cri.go:89] found id: ""
	I0829 19:39:01.242711   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.242721   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:01.242729   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:01.242788   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:01.275683   79869 cri.go:89] found id: ""
	I0829 19:39:01.275714   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.275730   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:01.275738   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:01.275803   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:01.308039   79869 cri.go:89] found id: ""
	I0829 19:39:01.308063   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.308071   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:01.308078   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:01.308137   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:01.344382   79869 cri.go:89] found id: ""
	I0829 19:39:01.344406   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.344413   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:01.344418   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:01.344466   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:01.379942   79869 cri.go:89] found id: ""
	I0829 19:39:01.379964   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.379972   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:01.379977   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:01.380021   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:01.414955   79869 cri.go:89] found id: ""
	I0829 19:39:01.414981   79869 logs.go:276] 0 containers: []
	W0829 19:39:01.414989   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:01.414997   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:01.415008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:01.469174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:01.469206   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:01.482719   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:01.482743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:01.546713   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:01.546731   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:01.546742   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:01.630655   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:01.630689   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:00.668839   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:02.670762   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:01.236788   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:03.237406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.430398   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.431044   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:04.167940   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:04.180881   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:04.180948   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:04.214782   79869 cri.go:89] found id: ""
	I0829 19:39:04.214809   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.214818   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:04.214824   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:04.214878   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:04.248274   79869 cri.go:89] found id: ""
	I0829 19:39:04.248300   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.248309   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:04.248316   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:04.248378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:04.280622   79869 cri.go:89] found id: ""
	I0829 19:39:04.280648   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.280657   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:04.280681   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:04.280749   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:04.313715   79869 cri.go:89] found id: ""
	I0829 19:39:04.313746   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.313754   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:04.313759   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:04.313806   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:04.345179   79869 cri.go:89] found id: ""
	I0829 19:39:04.345201   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.345209   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:04.345214   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:04.345264   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:04.377264   79869 cri.go:89] found id: ""
	I0829 19:39:04.377294   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.377304   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:04.377315   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:04.377378   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:04.410005   79869 cri.go:89] found id: ""
	I0829 19:39:04.410028   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.410034   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:04.410039   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:04.410109   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:04.444345   79869 cri.go:89] found id: ""
	I0829 19:39:04.444373   79869 logs.go:276] 0 containers: []
	W0829 19:39:04.444383   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:04.444393   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:04.444409   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:04.488071   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:04.488103   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:04.539394   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:04.539427   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:04.552285   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:04.552320   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:04.620973   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:04.620992   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:04.621006   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.201149   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:07.213392   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:07.213452   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:07.249778   79869 cri.go:89] found id: ""
	I0829 19:39:07.249801   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.249812   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:07.249817   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:07.249864   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:07.282763   79869 cri.go:89] found id: ""
	I0829 19:39:07.282792   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.282799   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:07.282805   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:07.282852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:07.316882   79869 cri.go:89] found id: ""
	I0829 19:39:07.316920   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.316932   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:07.316940   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:07.316990   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:07.348474   79869 cri.go:89] found id: ""
	I0829 19:39:07.348505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.348516   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:07.348532   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:07.348606   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:07.381442   79869 cri.go:89] found id: ""
	I0829 19:39:07.381467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.381474   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:07.381479   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:07.381535   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:07.414935   79869 cri.go:89] found id: ""
	I0829 19:39:07.414968   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.414981   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:07.414990   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:07.415053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:07.448427   79869 cri.go:89] found id: ""
	I0829 19:39:07.448467   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.448479   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:07.448486   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:07.448544   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:07.480475   79869 cri.go:89] found id: ""
	I0829 19:39:07.480505   79869 logs.go:276] 0 containers: []
	W0829 19:39:07.480515   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:07.480526   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:07.480540   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:07.532732   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:07.532766   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:07.546366   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:07.546411   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:07.615661   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:07.615679   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:07.615690   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:07.696874   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:07.696909   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:05.169920   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.170223   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:05.735375   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:07.737017   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.235794   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:09.930945   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:11.931285   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:10.236118   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:10.249347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:10.249413   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:10.280412   79869 cri.go:89] found id: ""
	I0829 19:39:10.280436   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.280446   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:10.280451   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:10.280499   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:10.313091   79869 cri.go:89] found id: ""
	I0829 19:39:10.313119   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.313126   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:10.313132   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:10.313187   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:10.347208   79869 cri.go:89] found id: ""
	I0829 19:39:10.347243   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.347252   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:10.347257   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:10.347306   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:10.380658   79869 cri.go:89] found id: ""
	I0829 19:39:10.380686   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.380696   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:10.380703   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:10.380750   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:10.412573   79869 cri.go:89] found id: ""
	I0829 19:39:10.412601   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.412613   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:10.412621   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:10.412682   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:10.449655   79869 cri.go:89] found id: ""
	I0829 19:39:10.449683   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.449691   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:10.449698   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:10.449759   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:10.485157   79869 cri.go:89] found id: ""
	I0829 19:39:10.485184   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.485195   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:10.485203   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:10.485262   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:10.522628   79869 cri.go:89] found id: ""
	I0829 19:39:10.522656   79869 logs.go:276] 0 containers: []
	W0829 19:39:10.522666   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:10.522673   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:10.522684   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:10.541079   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:10.541114   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:10.633462   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:10.633495   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:10.633512   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:10.714315   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:10.714354   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:10.751345   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:10.751371   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.306786   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:13.319368   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:13.319447   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:13.353999   79869 cri.go:89] found id: ""
	I0829 19:39:13.354029   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.354039   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:13.354047   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:13.354124   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:13.386953   79869 cri.go:89] found id: ""
	I0829 19:39:13.386982   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.386992   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:13.387000   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:13.387053   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:13.425835   79869 cri.go:89] found id: ""
	I0829 19:39:13.425860   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.425869   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:13.425876   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:13.425942   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:13.462808   79869 cri.go:89] found id: ""
	I0829 19:39:13.462835   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.462843   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:13.462849   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:13.462895   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:13.495194   79869 cri.go:89] found id: ""
	I0829 19:39:13.495228   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.495240   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:13.495248   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:13.495309   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:13.527239   79869 cri.go:89] found id: ""
	I0829 19:39:13.527268   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.527277   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:13.527283   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:13.527357   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:13.559081   79869 cri.go:89] found id: ""
	I0829 19:39:13.559110   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.559121   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:13.559128   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:13.559191   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:13.590723   79869 cri.go:89] found id: ""
	I0829 19:39:13.590748   79869 logs.go:276] 0 containers: []
	W0829 19:39:13.590757   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:13.590767   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:13.590781   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:13.645718   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:13.645751   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:13.659224   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:13.659250   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:13.733532   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:13.733566   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:13.733580   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:09.669065   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.169167   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:12.236756   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.237536   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:14.431203   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.930983   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:13.813639   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:13.813680   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.355269   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:16.377328   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:16.377395   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:16.437904   79869 cri.go:89] found id: ""
	I0829 19:39:16.437926   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.437933   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:16.437939   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:16.437987   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:16.470254   79869 cri.go:89] found id: ""
	I0829 19:39:16.470279   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.470287   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:16.470293   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:16.470353   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:16.502125   79869 cri.go:89] found id: ""
	I0829 19:39:16.502165   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.502177   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:16.502186   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:16.502242   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:16.539754   79869 cri.go:89] found id: ""
	I0829 19:39:16.539781   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.539791   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:16.539799   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:16.539862   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:16.576191   79869 cri.go:89] found id: ""
	I0829 19:39:16.576218   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.576229   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:16.576236   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:16.576292   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:16.610183   79869 cri.go:89] found id: ""
	I0829 19:39:16.610208   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.610219   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:16.610226   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:16.610285   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:16.642568   79869 cri.go:89] found id: ""
	I0829 19:39:16.642605   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.642614   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:16.642624   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:16.642689   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:16.675990   79869 cri.go:89] found id: ""
	I0829 19:39:16.676017   79869 logs.go:276] 0 containers: []
	W0829 19:39:16.676025   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:16.676033   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:16.676049   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:16.739204   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:16.739222   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:16.739233   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:16.816427   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:16.816460   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:16.851816   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:16.851850   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:16.903922   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:16.903958   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:14.169307   79073 pod_ready.go:103] pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:16.163640   79073 pod_ready.go:82] duration metric: took 4m0.000694226s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:16.163683   79073 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xs5gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:16.163706   79073 pod_ready.go:39] duration metric: took 4m12.036045825s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:16.163738   79073 kubeadm.go:597] duration metric: took 4m20.35086556s to restartPrimaryControlPlane
	W0829 19:39:16.163795   79073 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:16.163827   79073 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:16.736978   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.236047   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.431674   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:21.930447   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:19.418163   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:19.432617   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:19.432676   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:19.464691   79869 cri.go:89] found id: ""
	I0829 19:39:19.464718   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.464730   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:19.464737   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:19.464793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:19.496265   79869 cri.go:89] found id: ""
	I0829 19:39:19.496291   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.496302   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:19.496310   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:19.496397   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:19.527395   79869 cri.go:89] found id: ""
	I0829 19:39:19.527422   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.527433   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:19.527440   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:19.527501   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:19.558377   79869 cri.go:89] found id: ""
	I0829 19:39:19.558404   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.558414   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:19.558422   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:19.558484   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:19.589687   79869 cri.go:89] found id: ""
	I0829 19:39:19.589710   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.589718   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:19.589724   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:19.589813   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:19.624051   79869 cri.go:89] found id: ""
	I0829 19:39:19.624077   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.624086   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:19.624097   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:19.624143   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:19.656248   79869 cri.go:89] found id: ""
	I0829 19:39:19.656282   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.656293   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:19.656301   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:19.656364   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:19.689299   79869 cri.go:89] found id: ""
	I0829 19:39:19.689328   79869 logs.go:276] 0 containers: []
	W0829 19:39:19.689338   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:19.689349   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:19.689365   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:19.739952   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:19.739982   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:19.753169   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:19.753197   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:19.816948   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:19.816971   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:19.816983   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:19.892233   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:19.892270   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
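	# [Editorial sketch; not part of the captured log] Each probe cycle above runs the
	# same crictl query once per expected control-plane component and records
	# "No container was found matching ..." when the query returns nothing.
	# A condensed, hand-written equivalent of that loop:
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done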
	I0829 19:39:22.432456   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:22.444842   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:22.444915   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:22.475864   79869 cri.go:89] found id: ""
	I0829 19:39:22.475888   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.475899   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:22.475907   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:22.475954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:22.506824   79869 cri.go:89] found id: ""
	I0829 19:39:22.506851   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.506858   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:22.506864   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:22.506909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:22.544960   79869 cri.go:89] found id: ""
	I0829 19:39:22.544984   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.545002   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:22.545009   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:22.545074   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:22.584077   79869 cri.go:89] found id: ""
	I0829 19:39:22.584098   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.584106   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:22.584114   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:22.584169   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:22.621180   79869 cri.go:89] found id: ""
	I0829 19:39:22.621208   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.621220   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:22.621228   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:22.621288   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:22.658111   79869 cri.go:89] found id: ""
	I0829 19:39:22.658139   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.658151   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:22.658158   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:22.658220   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:22.695654   79869 cri.go:89] found id: ""
	I0829 19:39:22.695679   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.695686   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:22.695692   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:22.695742   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:22.733092   79869 cri.go:89] found id: ""
	I0829 19:39:22.733169   79869 logs.go:276] 0 containers: []
	W0829 19:39:22.733184   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:22.733196   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:22.733212   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:22.808449   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:22.808469   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:22.808485   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:22.889239   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:22.889275   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:22.933487   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:22.933513   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:22.983137   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:22.983178   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:21.236189   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.236347   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.237213   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:23.932634   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:26.431145   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:28.431496   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:25.496668   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:25.509508   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:25.509572   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:25.544292   79869 cri.go:89] found id: ""
	I0829 19:39:25.544321   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.544334   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:25.544341   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:25.544400   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:25.576739   79869 cri.go:89] found id: ""
	I0829 19:39:25.576768   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.576779   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:25.576787   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:25.576840   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:25.608040   79869 cri.go:89] found id: ""
	I0829 19:39:25.608067   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.608075   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:25.608081   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:25.608127   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:25.639675   79869 cri.go:89] found id: ""
	I0829 19:39:25.639703   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.639712   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:25.639720   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:25.639785   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:25.676966   79869 cri.go:89] found id: ""
	I0829 19:39:25.676995   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.677007   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:25.677014   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:25.677075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:25.712310   79869 cri.go:89] found id: ""
	I0829 19:39:25.712334   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.712341   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:25.712347   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:25.712393   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:25.746172   79869 cri.go:89] found id: ""
	I0829 19:39:25.746196   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.746203   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:25.746208   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:25.746257   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:25.778476   79869 cri.go:89] found id: ""
	I0829 19:39:25.778497   79869 logs.go:276] 0 containers: []
	W0829 19:39:25.778506   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:25.778514   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:25.778525   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:25.817791   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:25.817820   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:25.874597   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:25.874634   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:25.887469   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:25.887493   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:25.957308   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:25.957329   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:25.957348   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:28.536826   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:28.550981   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:28.551038   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:28.586607   79869 cri.go:89] found id: ""
	I0829 19:39:28.586636   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.586647   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:28.586656   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:28.586716   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:28.627696   79869 cri.go:89] found id: ""
	I0829 19:39:28.627720   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.627728   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:28.627734   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:28.627793   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:28.659877   79869 cri.go:89] found id: ""
	I0829 19:39:28.659906   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.659915   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:28.659920   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:28.659967   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:28.694834   79869 cri.go:89] found id: ""
	I0829 19:39:28.694861   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.694868   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:28.694874   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:28.694934   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:28.728833   79869 cri.go:89] found id: ""
	I0829 19:39:28.728866   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.728878   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:28.728888   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:28.728951   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:27.237871   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:29.735887   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:30.931849   79559 pod_ready.go:103] pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:33.424593   79559 pod_ready.go:82] duration metric: took 4m0.000177098s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" ...
	E0829 19:39:33.424633   79559 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-tbkxg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:39:33.424656   79559 pod_ready.go:39] duration metric: took 4m10.047294609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:33.424687   79559 kubeadm.go:597] duration metric: took 4m17.474785939s to restartPrimaryControlPlane
	W0829 19:39:33.424745   79559 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:33.424773   79559 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:28.762236   79869 cri.go:89] found id: ""
	I0829 19:39:28.762269   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.762279   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:28.762286   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:28.762352   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:28.794534   79869 cri.go:89] found id: ""
	I0829 19:39:28.794570   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.794583   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:28.794590   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:28.794660   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:28.827193   79869 cri.go:89] found id: ""
	I0829 19:39:28.827222   79869 logs.go:276] 0 containers: []
	W0829 19:39:28.827233   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:28.827244   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:28.827260   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:28.878905   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:28.878936   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:28.891795   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:28.891826   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:28.966249   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:28.966278   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:28.966294   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:29.044383   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:29.044417   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.582383   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:31.595250   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:31.595333   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:31.628763   79869 cri.go:89] found id: ""
	I0829 19:39:31.628791   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.628800   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:31.628805   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:31.628852   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:31.663489   79869 cri.go:89] found id: ""
	I0829 19:39:31.663521   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.663531   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:31.663537   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:31.663598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:31.698248   79869 cri.go:89] found id: ""
	I0829 19:39:31.698275   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.698283   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:31.698289   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:31.698340   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:31.732499   79869 cri.go:89] found id: ""
	I0829 19:39:31.732527   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.732536   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:31.732544   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:31.732601   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:31.773831   79869 cri.go:89] found id: ""
	I0829 19:39:31.773853   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.773861   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:31.773866   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:31.773909   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:31.807713   79869 cri.go:89] found id: ""
	I0829 19:39:31.807739   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.807747   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:31.807753   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:31.807814   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:31.841846   79869 cri.go:89] found id: ""
	I0829 19:39:31.841874   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.841881   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:31.841887   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:31.841945   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:31.872713   79869 cri.go:89] found id: ""
	I0829 19:39:31.872736   79869 logs.go:276] 0 containers: []
	W0829 19:39:31.872749   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:31.872760   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:31.872773   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:31.926299   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:31.926335   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:31.941134   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:31.941174   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:32.010600   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:32.010623   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:32.010638   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:32.091972   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:32.092008   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:31.737021   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.236447   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:34.631695   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:34.644986   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:34.645051   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:34.679788   79869 cri.go:89] found id: ""
	I0829 19:39:34.679816   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.679823   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:34.679832   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:34.679881   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:34.713113   79869 cri.go:89] found id: ""
	I0829 19:39:34.713139   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.713147   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:34.713152   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:34.713204   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:34.745410   79869 cri.go:89] found id: ""
	I0829 19:39:34.745439   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.745451   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:34.745459   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:34.745517   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:34.779089   79869 cri.go:89] found id: ""
	I0829 19:39:34.779117   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.779125   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:34.779132   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:34.779179   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:34.810966   79869 cri.go:89] found id: ""
	I0829 19:39:34.810995   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.811004   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:34.811011   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:34.811075   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:34.844859   79869 cri.go:89] found id: ""
	I0829 19:39:34.844894   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.844901   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:34.844907   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:34.844954   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:34.876014   79869 cri.go:89] found id: ""
	I0829 19:39:34.876036   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.876044   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:34.876050   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:34.876097   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:34.909383   79869 cri.go:89] found id: ""
	I0829 19:39:34.909412   79869 logs.go:276] 0 containers: []
	W0829 19:39:34.909421   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:34.909429   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:34.909440   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:34.956841   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:34.956875   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:34.969399   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:34.969423   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:35.034539   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:35.034574   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:35.034589   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:35.109702   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:35.109743   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:37.644897   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:37.658600   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:39:37.658665   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:39:37.693604   79869 cri.go:89] found id: ""
	I0829 19:39:37.693638   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.693646   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:39:37.693655   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:39:37.693763   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:39:37.727504   79869 cri.go:89] found id: ""
	I0829 19:39:37.727531   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.727538   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:39:37.727546   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:39:37.727598   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:39:37.762755   79869 cri.go:89] found id: ""
	I0829 19:39:37.762778   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.762786   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:39:37.762792   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:39:37.762838   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:39:37.799571   79869 cri.go:89] found id: ""
	I0829 19:39:37.799600   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.799611   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:39:37.799619   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:39:37.799669   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:39:37.833599   79869 cri.go:89] found id: ""
	I0829 19:39:37.833632   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.833644   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:39:37.833651   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:39:37.833714   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:39:37.867877   79869 cri.go:89] found id: ""
	I0829 19:39:37.867901   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.867909   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:39:37.867916   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:39:37.867968   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:39:37.901439   79869 cri.go:89] found id: ""
	I0829 19:39:37.901467   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.901475   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:39:37.901480   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:39:37.901527   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:39:37.936983   79869 cri.go:89] found id: ""
	I0829 19:39:37.937008   79869 logs.go:276] 0 containers: []
	W0829 19:39:37.937016   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0829 19:39:37.937024   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:39:37.937035   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0829 19:39:38.016873   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:39:38.016917   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:39:38.052565   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:39:38.052605   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:39:38.102174   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:39:38.102210   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:39:38.115273   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:39:38.115298   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:39:38.186012   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:39:36.736406   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:39.235941   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:42.401382   79073 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.237529155s)
	I0829 19:39:42.401460   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:42.428754   79073 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:42.441896   79073 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:42.456122   79073 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:42.456147   79073 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:42.456190   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:42.471887   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:42.471947   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:42.483709   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:42.493000   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:42.493070   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:42.511916   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.520829   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:42.520891   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:42.530567   79073 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:42.540199   79073 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:42.540252   79073 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
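	# [Editorial sketch; not part of the captured log] The grep/rm sequence above is the
	# stale-kubeconfig cleanup performed before re-running kubeadm init: any kubeconfig
	# that is missing or does not reference the expected control-plane endpoint is
	# removed so kubeadm can regenerate it. The same idea as a standalone loop:
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done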
	I0829 19:39:42.550058   79073 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:42.596809   79073 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:39:42.596966   79073 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:42.706623   79073 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:42.706766   79073 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:42.706931   79073 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:39:42.717740   79073 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:40.686558   79869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:39:40.699240   79869 kubeadm.go:597] duration metric: took 4m4.589527641s to restartPrimaryControlPlane
	W0829 19:39:40.699313   79869 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:39:40.699343   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:39:42.719760   79073 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:42.719862   79073 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:42.719929   79073 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:42.720023   79073 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:42.720079   79073 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:42.720144   79073 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:42.720193   79073 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:42.720248   79073 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:42.720315   79073 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:42.720386   79073 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:42.720459   79073 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:42.720496   79073 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:42.720555   79073 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:42.827328   79073 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:43.276222   79073 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:39:43.445594   79073 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:43.554811   79073 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:43.788184   79073 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:43.788791   79073 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:43.791871   79073 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:43.794448   79073 out.go:235]   - Booting up control plane ...
	I0829 19:39:43.794600   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:43.794702   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:43.794800   79073 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:43.813894   79073 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:43.822272   79073 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:43.822357   79073 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:44.450706   79869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.75133723s)
	I0829 19:39:44.450782   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:44.464692   79869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:44.473894   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:44.483464   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:44.483483   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:44.483524   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:39:44.492228   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:44.492277   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:44.501349   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:39:44.510241   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:44.510295   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:44.519210   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.528256   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:44.528314   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:44.537658   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:39:44.546976   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:44.547027   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:39:44.556823   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:39:44.630397   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:39:44.630474   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:39:44.771729   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:39:44.771869   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:39:44.772018   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:39:44.944512   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:41.236034   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:43.236446   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:45.237605   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:44.947210   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:39:44.947320   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:39:44.947422   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:39:44.947540   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:39:44.947658   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:39:44.947781   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:39:44.947881   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:39:44.950819   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:39:44.950926   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:39:44.951022   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:39:44.951125   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:39:44.951174   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:39:44.951244   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:39:45.171698   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:39:45.287539   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:39:45.443576   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:39:45.594891   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:39:45.609143   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:39:45.610374   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:39:45.610440   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:39:45.746839   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:39:45.748753   79869 out.go:235]   - Booting up control plane ...
	I0829 19:39:45.748882   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:39:45.753577   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:39:45.754588   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:39:45.755463   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:39:45.760295   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:39:43.950283   79073 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:39:43.950458   79073 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:39:44.452956   79073 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.82915ms
	I0829 19:39:44.453068   79073 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:39:49.455000   79073 kubeadm.go:310] [api-check] The API server is healthy after 5.001789194s
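	# [Editorial sketch; not part of the captured log] The two checks above poll the
	# kubelet's local healthz endpoint and then the API server. Checked by hand from the
	# node they would look roughly like this (port 8443 matches the control-plane
	# endpoint used elsewhere in this log; the exact path kubeadm polls can differ
	# by version):
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
	curl -skf https://127.0.0.1:8443/healthz && echo "apiserver reachable"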
	I0829 19:39:49.473145   79073 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:39:49.496760   79073 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:39:49.530950   79073 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:39:49.531148   79073 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-920571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:39:49.548546   79073 kubeadm.go:310] [bootstrap-token] Using token: bc4428.p8e3szrujohqnvnh
	I0829 19:39:47.735610   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.735833   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:49.549992   79073 out.go:235]   - Configuring RBAC rules ...
	I0829 19:39:49.550151   79073 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:39:49.558070   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:39:49.573758   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:39:49.579988   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:39:49.585250   79073 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:39:49.592477   79073 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:39:49.863168   79073 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:39:50.294056   79073 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:39:50.862652   79073 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:39:50.863644   79073 kubeadm.go:310] 
	I0829 19:39:50.863717   79073 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:39:50.863729   79073 kubeadm.go:310] 
	I0829 19:39:50.863861   79073 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:39:50.863881   79073 kubeadm.go:310] 
	I0829 19:39:50.863917   79073 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:39:50.864019   79073 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:39:50.864101   79073 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:39:50.864111   79073 kubeadm.go:310] 
	I0829 19:39:50.864215   79073 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:39:50.864225   79073 kubeadm.go:310] 
	I0829 19:39:50.864298   79073 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:39:50.864312   79073 kubeadm.go:310] 
	I0829 19:39:50.864398   79073 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:39:50.864517   79073 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:39:50.864617   79073 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:39:50.864631   79073 kubeadm.go:310] 
	I0829 19:39:50.864743   79073 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:39:50.864856   79073 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:39:50.864869   79073 kubeadm.go:310] 
	I0829 19:39:50.864983   79073 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865110   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:39:50.865142   79073 kubeadm.go:310] 	--control-plane 
	I0829 19:39:50.865152   79073 kubeadm.go:310] 
	I0829 19:39:50.865262   79073 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:39:50.865270   79073 kubeadm.go:310] 
	I0829 19:39:50.865370   79073 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bc4428.p8e3szrujohqnvnh \
	I0829 19:39:50.865527   79073 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:39:50.866485   79073 kubeadm.go:310] W0829 19:39:42.565022    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866852   79073 kubeadm.go:310] W0829 19:39:42.566073    2509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:39:50.866979   79073 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
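For reference, the bootstrap token printed above is short-lived, so a node joined later needs a fresh one. A minimal sketch of regenerating the worker join command on the control-plane host (standard kubeadm/openssl invocations, nothing minikube-specific):

    # Print a new worker join command with a fresh bootstrap token
    sudo kubeadm token create --print-join-command

    # Recompute the discovery-token-ca-cert-hash shown in the log
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'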
	I0829 19:39:50.867009   79073 cni.go:84] Creating CNI manager for ""
	I0829 19:39:50.867020   79073 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:39:50.868683   79073 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:39:50.869952   79073 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:39:50.880385   79073 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
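As an aside, the 496-byte conflist written above is not echoed into the log; its contents can be read back on the node to confirm the bridge plugin settings minikube deployed:

    # Inspect the CNI configuration minikube just wrote
    sudo ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist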
	I0829 19:39:50.900028   79073 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:39:50.900152   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:50.900187   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-920571 minikube.k8s.io/updated_at=2024_08_29T19_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=embed-certs-920571 minikube.k8s.io/primary=true
	I0829 19:39:51.090710   79073 ops.go:34] apiserver oom_adj: -16
	I0829 19:39:51.090865   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:51.591720   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.091579   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:52.591872   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.091671   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:53.591191   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.091640   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.591356   79073 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:39:54.674005   79073 kubeadm.go:1113] duration metric: took 3.773916232s to wait for elevateKubeSystemPrivileges
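Once elevateKubeSystemPrivileges completes, the effect of the two kubectl commands above can be spot-checked from any machine holding the cluster's kubeconfig; only the node name below is profile-specific:

    # Confirm kube-system's default service account is bound to cluster-admin
    kubectl get clusterrolebinding minikube-rbac -o wide

    # Confirm the minikube.k8s.io/* labels landed on the control-plane node
    kubectl get node embed-certs-920571 --show-labels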
	I0829 19:39:54.674046   79073 kubeadm.go:394] duration metric: took 4m58.910639816s to StartCluster
	I0829 19:39:54.674070   79073 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.674178   79073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:39:54.675789   79073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:39:54.676038   79073 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.243 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:39:54.676095   79073 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:39:54.676184   79073 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-920571"
	I0829 19:39:54.676210   79073 addons.go:69] Setting default-storageclass=true in profile "embed-certs-920571"
	I0829 19:39:54.676222   79073 addons.go:69] Setting metrics-server=true in profile "embed-certs-920571"
	I0829 19:39:54.676225   79073 config.go:182] Loaded profile config "embed-certs-920571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:39:54.676241   79073 addons.go:234] Setting addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:54.676264   79073 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-920571"
	I0829 19:39:54.676216   79073 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-920571"
	W0829 19:39:54.676329   79073 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:39:54.676360   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	W0829 19:39:54.676392   79073 addons.go:243] addon metrics-server should already be in state true
	I0829 19:39:54.676455   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.676650   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676664   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676682   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676684   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.676824   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.676859   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.677794   79073 out.go:177] * Verifying Kubernetes components...
	I0829 19:39:54.679112   79073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:39:54.694669   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0829 19:39:54.694717   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0829 19:39:54.695090   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695420   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.695532   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35861
	I0829 19:39:54.695640   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695656   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695925   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.695948   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.695951   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.696038   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696266   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.696373   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.696392   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.696443   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.696600   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.696629   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.696745   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.697378   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.697413   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.702955   79073 addons.go:234] Setting addon default-storageclass=true in "embed-certs-920571"
	W0829 19:39:54.702978   79073 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:39:54.703003   79073 host.go:66] Checking if "embed-certs-920571" exists ...
	I0829 19:39:54.703347   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.703377   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.714194   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0829 19:39:54.714526   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0829 19:39:54.714735   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.714916   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.715368   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715369   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.715389   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715401   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.715712   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715713   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.715944   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.715943   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.717556   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.717758   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.718972   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
	I0829 19:39:54.719212   79073 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:39:54.719303   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.719212   79073 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:39:52.236231   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.238843   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:54.719723   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.719735   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.720033   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.720307   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:39:54.720322   79073 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:39:54.720342   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.720533   79073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:39:54.720559   79073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:39:54.720952   79073 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:54.720975   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:39:54.720992   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.723754   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724174   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.724198   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724516   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.724684   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.724820   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.724879   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.724973   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.725426   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.725466   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.725687   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.725827   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.725982   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.726117   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.743443   79073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0829 19:39:54.744025   79073 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:39:54.744590   79073 main.go:141] libmachine: Using API Version  1
	I0829 19:39:54.744618   79073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:39:54.744908   79073 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:39:54.745030   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetState
	I0829 19:39:54.746560   79073 main.go:141] libmachine: (embed-certs-920571) Calling .DriverName
	I0829 19:39:54.746809   79073 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:54.746819   79073 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:39:54.746831   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHHostname
	I0829 19:39:54.749422   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749802   79073 main.go:141] libmachine: (embed-certs-920571) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:28:22", ip: ""} in network mk-embed-certs-920571: {Iface:virbr3 ExpiryTime:2024-08-29 20:34:43 +0000 UTC Type:0 Mac:52:54:00:35:28:22 Iaid: IPaddr:192.168.61.243 Prefix:24 Hostname:embed-certs-920571 Clientid:01:52:54:00:35:28:22}
	I0829 19:39:54.749827   79073 main.go:141] libmachine: (embed-certs-920571) DBG | domain embed-certs-920571 has defined IP address 192.168.61.243 and MAC address 52:54:00:35:28:22 in network mk-embed-certs-920571
	I0829 19:39:54.749904   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHPort
	I0829 19:39:54.750058   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHKeyPath
	I0829 19:39:54.750206   79073 main.go:141] libmachine: (embed-certs-920571) Calling .GetSSHUsername
	I0829 19:39:54.750320   79073 sshutil.go:53] new ssh client: &{IP:192.168.61.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/embed-certs-920571/id_rsa Username:docker}
	I0829 19:39:54.902922   79073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:39:54.921933   79073 node_ready.go:35] waiting up to 6m0s for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936483   79073 node_ready.go:49] node "embed-certs-920571" has status "Ready":"True"
	I0829 19:39:54.936513   79073 node_ready.go:38] duration metric: took 14.542582ms for node "embed-certs-920571" to be "Ready" ...
	I0829 19:39:54.936524   79073 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:39:54.945389   79073 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
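The readiness polling that follows goes through the Go client, but roughly the same checks can be reproduced with kubectl, assuming the kubeadm-standard tier=control-plane label on the static pods:

    # Wait for the node and the static control-plane pods to report Ready
    kubectl wait --for=condition=Ready node/embed-certs-920571 --timeout=360s
    kubectl -n kube-system wait --for=condition=Ready pod -l tier=control-plane --timeout=360s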
	I0829 19:39:55.076394   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:39:55.076421   79073 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:39:55.089140   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:39:55.096473   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:39:55.128207   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:39:55.128235   79073 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:39:55.186402   79073 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.186429   79073 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:39:55.262731   79073 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:39:55.548177   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548217   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548521   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548542   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.548555   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.548564   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.548824   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.548857   79073 main.go:141] libmachine: (embed-certs-920571) DBG | Closing plugin on server side
	I0829 19:39:55.548872   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:55.555956   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:55.555971   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:55.556210   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:55.556227   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020289   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020317   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020610   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020632   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.020642   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.020650   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.020912   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.020931   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.369749   79073 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.106975723s)
	I0829 19:39:56.369809   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.369825   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370119   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370143   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370154   79073 main.go:141] libmachine: Making call to close driver server
	I0829 19:39:56.370168   79073 main.go:141] libmachine: (embed-certs-920571) Calling .Close
	I0829 19:39:56.370407   79073 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:39:56.370428   79073 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:39:56.370440   79073 addons.go:475] Verifying addon metrics-server=true in "embed-certs-920571"
	I0829 19:39:56.373030   79073 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:39:56.374322   79073 addons.go:510] duration metric: took 1.698226444s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
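With the addons reported as enabled, the applied objects can be checked directly; note the log above shows this metrics-server using a fake.domain image, so its deployment is not expected to become available in this run. A small sketch with stock kubectl (the APIService name is the usual one shipped with the metrics-server manifests):

    # Objects created by the storage and metrics-server addons
    kubectl get storageclass
    kubectl -n kube-system get deployment metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io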
	I0829 19:39:56.460329   79073 pod_ready.go:93] pod "etcd-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:56.460362   79073 pod_ready.go:82] duration metric: took 1.51494335s for pod "etcd-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:56.460375   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467017   79073 pod_ready.go:93] pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:39:58.467040   79073 pod_ready.go:82] duration metric: took 2.006657264s for pod "kube-apiserver-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:58.467050   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:39:59.826535   79559 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.4017346s)
	I0829 19:39:59.826609   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:39:59.849311   79559 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:39:59.859855   79559 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:39:59.874237   79559 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:39:59.874262   79559 kubeadm.go:157] found existing configuration files:
	
	I0829 19:39:59.874315   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0829 19:39:59.883719   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:39:59.883785   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:39:59.893307   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0829 19:39:59.902478   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:39:59.902519   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:39:59.912664   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.932387   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:39:59.932443   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:39:59.948358   79559 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0829 19:39:59.965812   79559 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:39:59.965867   79559 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
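The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not reference this profile's endpoint (port 8444 here) is treated as stale and removed before kubeadm init rewrites it. The same logic as a compact, purely illustrative shell loop:

    # Drop kubeconfigs that do not point at this profile's API endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done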
	I0829 19:39:59.975437   79559 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:00.022167   79559 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:00.022347   79559 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:00.126622   79559 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:00.126777   79559 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:00.126914   79559 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:00.135123   79559 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:39:56.736712   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:39:59.235639   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:00.137714   79559 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:00.137806   79559 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:00.137875   79559 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:00.138003   79559 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:00.138114   79559 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:00.138184   79559 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:00.138240   79559 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:00.138297   79559 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:00.138351   79559 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:00.138443   79559 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:00.138555   79559 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:00.138607   79559 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:00.138682   79559 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:00.368674   79559 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:00.454426   79559 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:00.576835   79559 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:00.650342   79559 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:01.038392   79559 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:01.038806   79559 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:01.041297   79559 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:01.043020   79559 out.go:235]   - Booting up control plane ...
	I0829 19:40:01.043127   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:01.043224   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:01.043501   79559 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:01.062342   79559 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:01.068185   79559 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:01.068247   79559 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:01.202906   79559 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:01.203076   79559 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:01.705241   79559 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.336154ms
	I0829 19:40:01.705368   79559 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:00.476336   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:02.973188   79073 pod_ready.go:103] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.473576   79073 pod_ready.go:93] pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.473607   79073 pod_ready.go:82] duration metric: took 5.006550689s for pod "kube-controller-manager-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.473616   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478026   79073 pod_ready.go:93] pod "kube-proxy-25cmq" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.478045   79073 pod_ready.go:82] duration metric: took 4.423884ms for pod "kube-proxy-25cmq" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.478054   79073 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482541   79073 pod_ready.go:93] pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:03.482560   79073 pod_ready.go:82] duration metric: took 4.499742ms for pod "kube-scheduler-embed-certs-920571" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:03.482566   79073 pod_ready.go:39] duration metric: took 8.54603076s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:03.482581   79073 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:03.482623   79073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:03.502670   79073 api_server.go:72] duration metric: took 8.826595134s to wait for apiserver process to appear ...
	I0829 19:40:03.502695   79073 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:03.502718   79073 api_server.go:253] Checking apiserver healthz at https://192.168.61.243:8443/healthz ...
	I0829 19:40:03.507953   79073 api_server.go:279] https://192.168.61.243:8443/healthz returned 200:
	ok
	I0829 19:40:03.508948   79073 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:03.508968   79073 api_server.go:131] duration metric: took 6.265433ms to wait for apiserver health ...
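The healthz probe issued here against https://192.168.61.243:8443/healthz can also be run by hand; a minimal equivalent through kubectl, which reuses the cluster credentials instead of addressing the IP directly:

    # Prints "ok" when the API server is healthy
    kubectl get --raw /healthz
    # Per-check breakdown
    kubectl get --raw '/healthz?verbose'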
	I0829 19:40:03.508977   79073 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:03.514929   79073 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:03.514962   79073 system_pods.go:61] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.514971   79073 system_pods.go:61] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.514979   79073 system_pods.go:61] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.514987   79073 system_pods.go:61] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.514994   79073 system_pods.go:61] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.515000   79073 system_pods.go:61] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.515009   79073 system_pods.go:61] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.515018   79073 system_pods.go:61] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.515027   79073 system_pods.go:61] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.515036   79073 system_pods.go:74] duration metric: took 6.052187ms to wait for pod list to return data ...
	I0829 19:40:03.515046   79073 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:03.518040   79073 default_sa.go:45] found service account: "default"
	I0829 19:40:03.518060   79073 default_sa.go:55] duration metric: took 3.004653ms for default service account to be created ...
	I0829 19:40:03.518069   79073 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:03.523915   79073 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:03.523942   79073 system_pods.go:89] "coredns-6f6b679f8f-8qrn6" [af312704-4ea9-432d-85b2-67c59231187f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:03.523949   79073 system_pods.go:89] "coredns-6f6b679f8f-9f75n" [80f3b51d-fced-4cd9-8c43-2a4eea28e470] Running
	I0829 19:40:03.523954   79073 system_pods.go:89] "etcd-embed-certs-920571" [47af52f8-1d18-41fd-b013-e53fe813e4cf] Running
	I0829 19:40:03.523958   79073 system_pods.go:89] "kube-apiserver-embed-certs-920571" [1e9c3d77-55e1-4998-a2af-430254fde431] Running
	I0829 19:40:03.523962   79073 system_pods.go:89] "kube-controller-manager-embed-certs-920571" [08b33c9f-5858-41b7-a190-f34b611203ee] Running
	I0829 19:40:03.523965   79073 system_pods.go:89] "kube-proxy-25cmq" [35ecfe58-b448-4db0-b4cc-434422ec4ca6] Running
	I0829 19:40:03.523968   79073 system_pods.go:89] "kube-scheduler-embed-certs-920571" [ea37ea4f-390d-41d1-83b2-72fe1a09302b] Running
	I0829 19:40:03.523973   79073 system_pods.go:89] "metrics-server-6867b74b74-kb2c6" [8c0a4c7a-19e1-402e-ab6c-1a909d38c5a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:03.523978   79073 system_pods.go:89] "storage-provisioner" [741481e5-8e38-4522-a9df-4b36e6d5cf9c] Running
	I0829 19:40:03.523986   79073 system_pods.go:126] duration metric: took 5.911567ms to wait for k8s-apps to be running ...
	I0829 19:40:03.523997   79073 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:03.524049   79073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:03.541502   79073 system_svc.go:56] duration metric: took 17.4955ms WaitForService to wait for kubelet
	I0829 19:40:03.541538   79073 kubeadm.go:582] duration metric: took 8.865466463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:03.541564   79073 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:03.544700   79073 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:03.544728   79073 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:03.544744   79073 node_conditions.go:105] duration metric: took 3.172559ms to run NodePressure ...
	I0829 19:40:03.544758   79073 start.go:241] waiting for startup goroutines ...
	I0829 19:40:03.544771   79073 start.go:246] waiting for cluster config update ...
	I0829 19:40:03.544789   79073 start.go:255] writing updated cluster config ...
	I0829 19:40:03.545136   79073 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:03.609413   79073 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:03.611490   79073 out.go:177] * Done! kubectl is now configured to use "embed-certs-920571" cluster and "default" namespace by default
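At this point kubectl on the test host defaults to the new cluster; a couple of sanity commands that rely only on the kubeconfig minikube just wrote (the context name matches the profile name):

    kubectl config current-context
    kubectl --context embed-certs-920571 get nodes -o wide
    kubectl --context embed-certs-920571 get pods -A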
	I0829 19:40:01.236210   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:03.236420   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:05.237141   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:06.707891   79559 kubeadm.go:310] [api-check] The API server is healthy after 5.002523987s
	I0829 19:40:06.719470   79559 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:06.733886   79559 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:06.759672   79559 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:06.759933   79559 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-672127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:06.771514   79559 kubeadm.go:310] [bootstrap-token] Using token: fzav4x.eeztheucmrep51py
	I0829 19:40:06.772887   79559 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:06.773014   79559 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:06.778644   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:06.792388   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:06.798560   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:06.801930   79559 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:06.805767   79559 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:07.119680   79559 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:07.536660   79559 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:08.115528   79559 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:08.115550   79559 kubeadm.go:310] 
	I0829 19:40:08.115621   79559 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:08.115657   79559 kubeadm.go:310] 
	I0829 19:40:08.115780   79559 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:08.115802   79559 kubeadm.go:310] 
	I0829 19:40:08.115843   79559 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:08.115929   79559 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:08.116002   79559 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:08.116011   79559 kubeadm.go:310] 
	I0829 19:40:08.116087   79559 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:08.116099   79559 kubeadm.go:310] 
	I0829 19:40:08.116154   79559 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:08.116173   79559 kubeadm.go:310] 
	I0829 19:40:08.116247   79559 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:08.116386   79559 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:08.116477   79559 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:08.116487   79559 kubeadm.go:310] 
	I0829 19:40:08.116599   79559 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:08.116705   79559 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:08.116712   79559 kubeadm.go:310] 
	I0829 19:40:08.116779   79559 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.116879   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:08.116931   79559 kubeadm.go:310] 	--control-plane 
	I0829 19:40:08.116947   79559 kubeadm.go:310] 
	I0829 19:40:08.117048   79559 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:08.117058   79559 kubeadm.go:310] 
	I0829 19:40:08.117154   79559 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token fzav4x.eeztheucmrep51py \
	I0829 19:40:08.117270   79559 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:08.118512   79559 kubeadm.go:310] W0829 19:39:59.991394    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118870   79559 kubeadm.go:310] W0829 19:39:59.992249    2559 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:08.118981   79559 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:08.119009   79559 cni.go:84] Creating CNI manager for ""
	I0829 19:40:08.119019   79559 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:08.120832   79559 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:08.122029   79559 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:08.133326   79559 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:08.150808   79559 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:08.150867   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:08.150884   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-672127 minikube.k8s.io/updated_at=2024_08_29T19_40_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=default-k8s-diff-port-672127 minikube.k8s.io/primary=true
	I0829 19:40:08.170047   79559 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:08.350103   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:07.736119   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:10.236910   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
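This pod keeps reporting Ready:False throughout the window captured here; when a metrics-server pod never becomes ready, its container status, logs, and recent events usually explain why. A generic diagnosis sketch, assuming the matching profile's kubeconfig/context is selected:

    # Why is the metrics-server pod not Ready?
    kubectl -n kube-system describe pod metrics-server-6867b74b74-svnwn
    kubectl -n kube-system logs deploy/metrics-server --all-containers --tail=50
    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20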
	I0829 19:40:08.850762   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.350244   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:09.850222   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.350462   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:10.850237   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.350179   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:11.851033   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.351069   79559 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:12.442963   79559 kubeadm.go:1113] duration metric: took 4.29215456s to wait for elevateKubeSystemPrivileges
	I0829 19:40:12.442998   79559 kubeadm.go:394] duration metric: took 4m56.544013459s to StartCluster
	I0829 19:40:12.443020   79559 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.443110   79559 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:40:12.444757   79559 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:40:12.444998   79559 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.70 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:40:12.445061   79559 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:40:12.445138   79559 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445151   79559 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445173   79559 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445181   79559 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:40:12.445179   79559 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-672127"
	I0829 19:40:12.445210   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445210   79559 config.go:182] Loaded profile config "default-k8s-diff-port-672127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:40:12.445266   79559 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-672127"
	I0829 19:40:12.445313   79559 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.445323   79559 addons.go:243] addon metrics-server should already be in state true
	I0829 19:40:12.445347   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.445625   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445658   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445662   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445683   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.445737   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.445775   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.446414   79559 out.go:177] * Verifying Kubernetes components...
	I0829 19:40:12.447652   79559 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:40:12.461386   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0829 19:40:12.461436   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0829 19:40:12.461805   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.461831   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462057   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0829 19:40:12.462324   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462327   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462341   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462347   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462373   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.462701   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462798   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.462807   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.462817   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.462886   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.463109   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.463360   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463392   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.463586   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.463607   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.465961   79559 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-672127"
	W0829 19:40:12.465971   79559 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:40:12.465991   79559 host.go:66] Checking if "default-k8s-diff-port-672127" exists ...
	I0829 19:40:12.466309   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.466355   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.480989   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0829 19:40:12.481216   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0829 19:40:12.481407   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481639   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.481843   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.481858   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482222   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.482249   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.482291   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482440   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.482576   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.482745   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.484681   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485336   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.485664   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0829 19:40:12.486377   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.486547   79559 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:40:12.486922   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.486945   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.487310   79559 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:40:12.487586   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.488042   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:40:12.488060   79559 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:40:12.488081   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.488266   79559 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:40:12.488307   79559 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:40:12.488874   79559 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.488897   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:40:12.488914   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.492291   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492699   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.492814   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.492844   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493059   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493128   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.493144   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.493259   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493300   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.493432   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.493471   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.493822   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.493972   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.494114   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.505220   79559 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I0829 19:40:12.505690   79559 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:40:12.506337   79559 main.go:141] libmachine: Using API Version  1
	I0829 19:40:12.506363   79559 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:40:12.506727   79559 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:40:12.506899   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetState
	I0829 19:40:12.508602   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .DriverName
	I0829 19:40:12.508796   79559 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.508810   79559 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:40:12.508829   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHHostname
	I0829 19:40:12.511310   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511660   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a8:cf", ip: ""} in network mk-default-k8s-diff-port-672127: {Iface:virbr2 ExpiryTime:2024-08-29 20:35:01 +0000 UTC Type:0 Mac:52:54:00:db:a8:cf Iaid: IPaddr:192.168.50.70 Prefix:24 Hostname:default-k8s-diff-port-672127 Clientid:01:52:54:00:db:a8:cf}
	I0829 19:40:12.511691   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | domain default-k8s-diff-port-672127 has defined IP address 192.168.50.70 and MAC address 52:54:00:db:a8:cf in network mk-default-k8s-diff-port-672127
	I0829 19:40:12.511815   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHPort
	I0829 19:40:12.511969   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHKeyPath
	I0829 19:40:12.512110   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .GetSSHUsername
	I0829 19:40:12.512253   79559 sshutil.go:53] new ssh client: &{IP:192.168.50.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/default-k8s-diff-port-672127/id_rsa Username:docker}
	I0829 19:40:12.642279   79559 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:40:12.666598   79559 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682873   79559 node_ready.go:49] node "default-k8s-diff-port-672127" has status "Ready":"True"
	I0829 19:40:12.682895   79559 node_ready.go:38] duration metric: took 16.267143ms for node "default-k8s-diff-port-672127" to be "Ready" ...
	I0829 19:40:12.682904   79559 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:12.693451   79559 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:12.736525   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:40:12.736548   79559 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:40:12.754764   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:40:12.754786   79559 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:40:12.806826   79559 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:12.806856   79559 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:40:12.817164   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:40:12.837896   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:40:12.903140   79559 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:40:14.124266   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.307063383s)
	I0829 19:40:14.124305   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.286373382s)
	I0829 19:40:14.124324   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124337   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124343   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124368   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124430   79559 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221258684s)
	I0829 19:40:14.124473   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124487   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124635   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124649   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124659   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124667   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124794   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124813   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.124831   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124848   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124856   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124873   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124864   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124882   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.124896   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.124902   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.124913   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.124935   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.125356   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.125359   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.125381   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126568   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) DBG | Closing plugin on server side
	I0829 19:40:14.126637   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.126656   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.126704   79559 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-672127"
	I0829 19:40:14.193216   79559 main.go:141] libmachine: Making call to close driver server
	I0829 19:40:14.193238   79559 main.go:141] libmachine: (default-k8s-diff-port-672127) Calling .Close
	I0829 19:40:14.193544   79559 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:40:14.193562   79559 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:40:14.195467   79559 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0829 19:40:12.237641   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.736679   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:14.196698   79559 addons.go:510] duration metric: took 1.751639165s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0829 19:40:14.720042   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.199482   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:17.235908   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.735901   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.199705   79559 pod_ready.go:103] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:19.699776   79559 pod_ready.go:93] pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.699801   79559 pod_ready.go:82] duration metric: took 7.006327617s for pod "etcd-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.699810   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704240   79559 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:19.704261   79559 pod_ready.go:82] duration metric: took 4.444744ms for pod "kube-apiserver-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:19.704269   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710740   79559 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.710761   79559 pod_ready.go:82] duration metric: took 2.006486043s for pod "kube-controller-manager-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.710770   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715111   79559 pod_ready.go:93] pod "kube-proxy-nqbn4" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.715134   79559 pod_ready.go:82] duration metric: took 4.357535ms for pod "kube-proxy-nqbn4" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.715146   79559 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719192   79559 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace has status "Ready":"True"
	I0829 19:40:21.719207   79559 pod_ready.go:82] duration metric: took 4.054087ms for pod "kube-scheduler-default-k8s-diff-port-672127" in "kube-system" namespace to be "Ready" ...
	I0829 19:40:21.719222   79559 pod_ready.go:39] duration metric: took 9.036299009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:21.719234   79559 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:40:21.719289   79559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:40:21.734507   79559 api_server.go:72] duration metric: took 9.289477227s to wait for apiserver process to appear ...
	I0829 19:40:21.734531   79559 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:40:21.734555   79559 api_server.go:253] Checking apiserver healthz at https://192.168.50.70:8444/healthz ...
	I0829 19:40:21.739963   79559 api_server.go:279] https://192.168.50.70:8444/healthz returned 200:
	ok
	I0829 19:40:21.740847   79559 api_server.go:141] control plane version: v1.31.0
	I0829 19:40:21.740865   79559 api_server.go:131] duration metric: took 6.327694ms to wait for apiserver health ...
	I0829 19:40:21.740872   79559 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:40:21.747609   79559 system_pods.go:59] 9 kube-system pods found
	I0829 19:40:21.747636   79559 system_pods.go:61] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.747643   79559 system_pods.go:61] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:40:21.747648   79559 system_pods.go:61] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.747654   79559 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.747659   79559 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.747662   79559 system_pods.go:61] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.747665   79559 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.747670   79559 system_pods.go:61] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.747674   79559 system_pods.go:61] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.747680   79559 system_pods.go:74] duration metric: took 6.803459ms to wait for pod list to return data ...
	I0829 19:40:21.747689   79559 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:40:21.750153   79559 default_sa.go:45] found service account: "default"
	I0829 19:40:21.750168   79559 default_sa.go:55] duration metric: took 2.474593ms for default service account to be created ...
	I0829 19:40:21.750175   79559 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:40:21.901186   79559 system_pods.go:86] 9 kube-system pods found
	I0829 19:40:21.901213   79559 system_pods.go:89] "coredns-6f6b679f8f-5p2vn" [8f7749c2-4cb3-4372-8144-46109f9b89b7] Running
	I0829 19:40:21.901219   79559 system_pods.go:89] "coredns-6f6b679f8f-dxbt5" [84373054-e72e-469a-bf2f-101943117851] Running
	I0829 19:40:21.901222   79559 system_pods.go:89] "etcd-default-k8s-diff-port-672127" [aacacabd-96ed-4635-b550-0bff36ec6c36] Running
	I0829 19:40:21.901227   79559 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-672127" [6cf0f710-c339-438e-a572-2e8c498fa63a] Running
	I0829 19:40:21.901231   79559 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-672127" [aaa934bc-c350-4318-9246-a2bb4b7d77f1] Running
	I0829 19:40:21.901235   79559 system_pods.go:89] "kube-proxy-nqbn4" [c5b48a1f-725b-45b7-8a3f-0df0f3371d2f] Running
	I0829 19:40:21.901238   79559 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-672127" [fcf1ef19-acf4-41f2-a288-cfa8f978961b] Running
	I0829 19:40:21.901245   79559 system_pods.go:89] "metrics-server-6867b74b74-4p8qr" [8026c5c8-9f02-45a1-8cc8-9d485dc49cbd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:40:21.901249   79559 system_pods.go:89] "storage-provisioner" [5193c8e4-cbf8-4cf5-a0fc-e18e4f105c00] Running
	I0829 19:40:21.901257   79559 system_pods.go:126] duration metric: took 151.07798ms to wait for k8s-apps to be running ...
	I0829 19:40:21.901263   79559 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:40:21.901306   79559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:21.916730   79559 system_svc.go:56] duration metric: took 15.457902ms WaitForService to wait for kubelet
	I0829 19:40:21.916757   79559 kubeadm.go:582] duration metric: took 9.471732105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:40:21.916773   79559 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:40:22.099083   79559 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:40:22.099119   79559 node_conditions.go:123] node cpu capacity is 2
	I0829 19:40:22.099133   79559 node_conditions.go:105] duration metric: took 182.354927ms to run NodePressure ...
	I0829 19:40:22.099147   79559 start.go:241] waiting for startup goroutines ...
	I0829 19:40:22.099156   79559 start.go:246] waiting for cluster config update ...
	I0829 19:40:22.099168   79559 start.go:255] writing updated cluster config ...
	I0829 19:40:22.099536   79559 ssh_runner.go:195] Run: rm -f paused
	I0829 19:40:22.148307   79559 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:40:22.150361   79559 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-672127" cluster and "default" namespace by default
	I0829 19:40:21.736121   78865 pod_ready.go:103] pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace has status "Ready":"False"
	I0829 19:40:23.229905   78865 pod_ready.go:82] duration metric: took 4m0.000141946s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" ...
	E0829 19:40:23.229943   78865 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-svnwn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0829 19:40:23.229991   78865 pod_ready.go:39] duration metric: took 4m10.70989222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:40:23.230021   78865 kubeadm.go:597] duration metric: took 4m18.600330645s to restartPrimaryControlPlane
	W0829 19:40:23.230078   78865 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 19:40:23.230136   78865 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:40:25.762989   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:40:25.763689   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:25.763863   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:30.764613   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:30.764821   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:40.765517   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:40:40.765752   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:40:49.374221   78865 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.144057875s)
	I0829 19:40:49.374297   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:40:49.389586   78865 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 19:40:49.399146   78865 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:40:49.408450   78865 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:40:49.408469   78865 kubeadm.go:157] found existing configuration files:
	
	I0829 19:40:49.408521   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:40:49.417651   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:40:49.417706   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:40:49.427073   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:40:49.435307   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:40:49.435356   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:40:49.443720   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.452437   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:40:49.452493   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:40:49.461133   78865 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:40:49.469515   78865 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:40:49.469564   78865 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 19:40:49.478224   78865 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:40:49.523193   78865 kubeadm.go:310] W0829 19:40:49.504457    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.523801   78865 kubeadm.go:310] W0829 19:40:49.505165    3026 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 19:40:49.640221   78865 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:40:57.429227   78865 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 19:40:57.429293   78865 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:40:57.429396   78865 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:40:57.429536   78865 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:40:57.429665   78865 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 19:40:57.429757   78865 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:40:57.431358   78865 out.go:235]   - Generating certificates and keys ...
	I0829 19:40:57.431434   78865 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:40:57.431485   78865 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:40:57.431566   78865 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:40:57.431640   78865 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:40:57.431711   78865 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:40:57.431786   78865 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:40:57.431847   78865 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:40:57.431893   78865 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:40:57.431956   78865 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:40:57.432013   78865 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:40:57.432052   78865 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:40:57.432109   78865 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:40:57.432186   78865 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:40:57.432275   78865 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 19:40:57.432352   78865 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:40:57.432444   78865 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:40:57.432518   78865 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:40:57.432595   78865 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:40:57.432648   78865 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:40:57.434057   78865 out.go:235]   - Booting up control plane ...
	I0829 19:40:57.434161   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:40:57.434245   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:40:57.434298   78865 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:40:57.434396   78865 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:40:57.434475   78865 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:40:57.434509   78865 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:40:57.434687   78865 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 19:40:57.434772   78865 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 19:40:57.434824   78865 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 509.075612ms
	I0829 19:40:57.434887   78865 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 19:40:57.434932   78865 kubeadm.go:310] [api-check] The API server is healthy after 5.002117161s
	I0829 19:40:57.435094   78865 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 19:40:57.435232   78865 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 19:40:57.435284   78865 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 19:40:57.435429   78865 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-690795 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 19:40:57.435472   78865 kubeadm.go:310] [bootstrap-token] Using token: adxyev.rcmf9k5ok190h0g1
	I0829 19:40:57.436846   78865 out.go:235]   - Configuring RBAC rules ...
	I0829 19:40:57.436936   78865 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 19:40:57.437001   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 19:40:57.437113   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 19:40:57.437214   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 19:40:57.437307   78865 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 19:40:57.437380   78865 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 19:40:57.437480   78865 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 19:40:57.437528   78865 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 19:40:57.437577   78865 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 19:40:57.437583   78865 kubeadm.go:310] 
	I0829 19:40:57.437635   78865 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 19:40:57.437641   78865 kubeadm.go:310] 
	I0829 19:40:57.437704   78865 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 19:40:57.437710   78865 kubeadm.go:310] 
	I0829 19:40:57.437744   78865 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 19:40:57.437807   78865 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 19:40:57.437851   78865 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 19:40:57.437857   78865 kubeadm.go:310] 
	I0829 19:40:57.437907   78865 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 19:40:57.437913   78865 kubeadm.go:310] 
	I0829 19:40:57.437951   78865 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 19:40:57.437957   78865 kubeadm.go:310] 
	I0829 19:40:57.438000   78865 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 19:40:57.438107   78865 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 19:40:57.438188   78865 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 19:40:57.438200   78865 kubeadm.go:310] 
	I0829 19:40:57.438289   78865 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 19:40:57.438359   78865 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 19:40:57.438364   78865 kubeadm.go:310] 
	I0829 19:40:57.438429   78865 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438507   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 \
	I0829 19:40:57.438525   78865 kubeadm.go:310] 	--control-plane 
	I0829 19:40:57.438534   78865 kubeadm.go:310] 
	I0829 19:40:57.438611   78865 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 19:40:57.438621   78865 kubeadm.go:310] 
	I0829 19:40:57.438688   78865 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token adxyev.rcmf9k5ok190h0g1 \
	I0829 19:40:57.438791   78865 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bea94402a7ca0cb3f07ae7c23e7390481fce43bb96a9834139cea2e2e1c9f0c4 
	I0829 19:40:57.438814   78865 cni.go:84] Creating CNI manager for ""
	I0829 19:40:57.438825   78865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 19:40:57.440836   78865 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 19:40:57.442065   78865 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 19:40:57.452700   78865 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 19:40:57.469549   78865 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 19:40:57.469621   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:57.469656   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-690795 minikube.k8s.io/updated_at=2024_08_29T19_40_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=no-preload-690795 minikube.k8s.io/primary=true
	I0829 19:40:57.503411   78865 ops.go:34] apiserver oom_adj: -16
	I0829 19:40:57.648807   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.149067   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:58.649770   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.148932   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:40:59.649114   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.149833   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:00.649474   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.149795   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.649154   78865 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 19:41:01.745084   78865 kubeadm.go:1113] duration metric: took 4.275525047s to wait for elevateKubeSystemPrivileges
	I0829 19:41:01.745117   78865 kubeadm.go:394] duration metric: took 4m57.169926854s to StartCluster
	I0829 19:41:01.745134   78865 settings.go:142] acquiring lock: {Name:mk0690d595743a6b5a8610003c7d1ba188d72206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.745209   78865 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:41:01.746775   78865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-13056/kubeconfig: {Name:mkd82741dadfddc0358628d049361dead34c468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 19:41:01.747005   78865 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0829 19:41:01.747062   78865 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 19:41:01.747155   78865 addons.go:69] Setting storage-provisioner=true in profile "no-preload-690795"
	I0829 19:41:01.747175   78865 addons.go:69] Setting default-storageclass=true in profile "no-preload-690795"
	I0829 19:41:01.747189   78865 addons.go:234] Setting addon storage-provisioner=true in "no-preload-690795"
	W0829 19:41:01.747199   78865 addons.go:243] addon storage-provisioner should already be in state true
	I0829 19:41:01.747200   78865 addons.go:69] Setting metrics-server=true in profile "no-preload-690795"
	I0829 19:41:01.747240   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747246   78865 config.go:182] Loaded profile config "no-preload-690795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:41:01.747243   78865 addons.go:234] Setting addon metrics-server=true in "no-preload-690795"
	W0829 19:41:01.747307   78865 addons.go:243] addon metrics-server should already be in state true
	I0829 19:41:01.747333   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.747206   78865 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-690795"
	I0829 19:41:01.747652   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747670   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747678   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747702   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.747780   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.747810   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.748790   78865 out.go:177] * Verifying Kubernetes components...
	I0829 19:41:01.750069   78865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 19:41:01.764006   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0829 19:41:01.765511   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766194   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.766218   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.766287   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0829 19:41:01.766670   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.766694   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.766912   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.766965   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I0829 19:41:01.767129   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767149   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.767304   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.767506   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.767737   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.767755   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.768073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.768202   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768241   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.768615   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.768646   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.771065   78865 addons.go:234] Setting addon default-storageclass=true in "no-preload-690795"
	W0829 19:41:01.771088   78865 addons.go:243] addon default-storageclass should already be in state true
	I0829 19:41:01.771117   78865 host.go:66] Checking if "no-preload-690795" exists ...
	I0829 19:41:01.771415   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.771441   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.787271   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0829 19:41:01.788003   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.788577   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.788606   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.788885   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0829 19:41:01.789065   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0829 19:41:01.789073   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.789361   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.789716   78865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 19:41:01.789754   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.789774   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.790084   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.790243   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.790319   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.791018   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.791029   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.791393   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.791721   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.792306   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793557   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.793806   78865 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 19:41:01.794942   78865 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0829 19:41:01.795033   78865 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:01.795049   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 19:41:01.795067   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.796032   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 19:41:01.796048   78865 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 19:41:01.796065   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.799646   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800163   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800618   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800644   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800826   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.800843   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.800941   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801043   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.801114   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801184   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.801239   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801363   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.801367   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.801484   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.807187   78865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0829 19:41:01.807604   78865 main.go:141] libmachine: () Calling .GetVersion
	I0829 19:41:01.808056   78865 main.go:141] libmachine: Using API Version  1
	I0829 19:41:01.808070   78865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 19:41:01.808471   78865 main.go:141] libmachine: () Calling .GetMachineName
	I0829 19:41:01.808671   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetState
	I0829 19:41:01.810374   78865 main.go:141] libmachine: (no-preload-690795) Calling .DriverName
	I0829 19:41:01.810569   78865 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:01.810579   78865 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 19:41:01.810591   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHHostname
	I0829 19:41:01.813314   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.813766   78865 main.go:141] libmachine: (no-preload-690795) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:48:ed", ip: ""} in network mk-no-preload-690795: {Iface:virbr1 ExpiryTime:2024-08-29 20:35:39 +0000 UTC Type:0 Mac:52:54:00:2b:48:ed Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-690795 Clientid:01:52:54:00:2b:48:ed}
	I0829 19:41:01.813776   78865 main.go:141] libmachine: (no-preload-690795) DBG | domain no-preload-690795 has defined IP address 192.168.39.76 and MAC address 52:54:00:2b:48:ed in network mk-no-preload-690795
	I0829 19:41:01.814029   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHPort
	I0829 19:41:01.814187   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHKeyPath
	I0829 19:41:01.814292   78865 main.go:141] libmachine: (no-preload-690795) Calling .GetSSHUsername
	I0829 19:41:01.814379   78865 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/no-preload-690795/id_rsa Username:docker}
	I0829 19:41:01.963011   78865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 19:41:01.981935   78865 node_ready.go:35] waiting up to 6m0s for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998366   78865 node_ready.go:49] node "no-preload-690795" has status "Ready":"True"
	I0829 19:41:01.998389   78865 node_ready.go:38] duration metric: took 16.418591ms for node "no-preload-690795" to be "Ready" ...
	I0829 19:41:01.998398   78865 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 19:41:02.005811   78865 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:02.053495   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 19:41:02.197657   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 19:41:02.239853   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 19:41:02.239877   78865 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0829 19:41:02.270764   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 19:41:02.270789   78865 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 19:41:02.327819   78865 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.327853   78865 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 19:41:02.380812   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.380843   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381117   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381191   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.381209   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.381217   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.381432   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.381444   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:02.384211   78865 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 19:41:02.387013   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:02.387027   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:02.387286   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:02.387333   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:02.387345   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.027502   78865 pod_ready.go:93] pod "etcd-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:03.027535   78865 pod_ready.go:82] duration metric: took 1.02170157s for pod "etcd-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.027550   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:03.410428   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212715771s)
	I0829 19:41:03.410485   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.410503   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412586   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.412590   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412614   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412625   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.412632   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.412926   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.412947   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.412954   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.587379   78865 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203116606s)
	I0829 19:41:03.587437   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587452   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587770   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.587840   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.587859   78865 main.go:141] libmachine: Making call to close driver server
	I0829 19:41:03.587874   78865 main.go:141] libmachine: (no-preload-690795) Calling .Close
	I0829 19:41:03.587878   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.588185   78865 main.go:141] libmachine: Successfully made call to close driver server
	I0829 19:41:03.588206   78865 main.go:141] libmachine: Making call to close connection to plugin binary
	I0829 19:41:03.588218   78865 addons.go:475] Verifying addon metrics-server=true in "no-preload-690795"
	I0829 19:41:03.588192   78865 main.go:141] libmachine: (no-preload-690795) DBG | Closing plugin on server side
	I0829 19:41:03.590131   78865 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0829 19:41:00.767158   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:00.767429   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:03.591280   78865 addons.go:510] duration metric: took 1.844219817s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0829 19:41:05.035315   78865 pod_ready.go:103] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"False"
	I0829 19:41:06.033037   78865 pod_ready.go:93] pod "kube-apiserver-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:06.033060   78865 pod_ready.go:82] duration metric: took 3.005501862s for pod "kube-apiserver-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:06.033068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039035   78865 pod_ready.go:93] pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.039059   78865 pod_ready.go:82] duration metric: took 1.005984859s for pod "kube-controller-manager-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.039068   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043096   78865 pod_ready.go:93] pod "kube-proxy-p7zvh" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.043116   78865 pod_ready.go:82] duration metric: took 4.042896ms for pod "kube-proxy-p7zvh" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.043125   78865 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046934   78865 pod_ready.go:93] pod "kube-scheduler-no-preload-690795" in "kube-system" namespace has status "Ready":"True"
	I0829 19:41:07.046957   78865 pod_ready.go:82] duration metric: took 3.826283ms for pod "kube-scheduler-no-preload-690795" in "kube-system" namespace to be "Ready" ...
	I0829 19:41:07.046966   78865 pod_ready.go:39] duration metric: took 5.048560252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
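	For reference, the readiness wait logged by pod_ready.go above amounts to listing pods in kube-system by each system-critical label and polling until their PodReady condition is True. A minimal illustrative sketch using client-go follows; it is not minikube's implementation, and the kubeconfig path and selector list are assumptions taken from the log lines above.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True, the same
    // condition pod_ready.go logs above as `"Ready":"True"`.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed kubeconfig location; minikube resolves its own per-profile config.
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Poll each system-critical label until its pods report Ready, mirroring
    	// the per-component waits in the log.
    	selectors := []string{"component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
    	for _, sel := range selectors {
    		for {
    			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    				metav1.ListOptions{LabelSelector: sel})
    			if err == nil && len(pods.Items) > 0 && isPodReady(&pods.Items[0]) {
    				fmt.Printf("pods matching %q are Ready\n", sel)
    				break
    			}
    			time.Sleep(2 * time.Second)
    		}
    	}
    }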
	I0829 19:41:07.046983   78865 api_server.go:52] waiting for apiserver process to appear ...
	I0829 19:41:07.047036   78865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:41:07.062234   78865 api_server.go:72] duration metric: took 5.315200823s to wait for apiserver process to appear ...
	I0829 19:41:07.062256   78865 api_server.go:88] waiting for apiserver healthz status ...
	I0829 19:41:07.062277   78865 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0829 19:41:07.068022   78865 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0829 19:41:07.069170   78865 api_server.go:141] control plane version: v1.31.0
	I0829 19:41:07.069190   78865 api_server.go:131] duration metric: took 6.927858ms to wait for apiserver health ...
	I0829 19:41:07.069198   78865 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 19:41:07.075909   78865 system_pods.go:59] 9 kube-system pods found
	I0829 19:41:07.075932   78865 system_pods.go:61] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.075939   78865 system_pods.go:61] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.075944   78865 system_pods.go:61] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.075949   78865 system_pods.go:61] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.075953   78865 system_pods.go:61] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.075956   78865 system_pods.go:61] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.075960   78865 system_pods.go:61] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.075964   78865 system_pods.go:61] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.075968   78865 system_pods.go:61] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.075975   78865 system_pods.go:74] duration metric: took 6.771333ms to wait for pod list to return data ...
	I0829 19:41:07.075985   78865 default_sa.go:34] waiting for default service account to be created ...
	I0829 19:41:07.079235   78865 default_sa.go:45] found service account: "default"
	I0829 19:41:07.079255   78865 default_sa.go:55] duration metric: took 3.264804ms for default service account to be created ...
	I0829 19:41:07.079263   78865 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 19:41:07.083981   78865 system_pods.go:86] 9 kube-system pods found
	I0829 19:41:07.084006   78865 system_pods.go:89] "coredns-6f6b679f8f-wr7bq" [ac054ab5-3a0e-433e-add6-5817ce6f1c27] Running
	I0829 19:41:07.084014   78865 system_pods.go:89] "coredns-6f6b679f8f-xbfb6" [a94d281f-1fdb-4e33-a060-17cd5981462c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 19:41:07.084019   78865 system_pods.go:89] "etcd-no-preload-690795" [a13c0d5b-fb99-4c57-8401-b0580a0e97a7] Running
	I0829 19:41:07.084025   78865 system_pods.go:89] "kube-apiserver-no-preload-690795" [1ec66a25-5b6d-4b81-8c44-b4dffad1ec12] Running
	I0829 19:41:07.084029   78865 system_pods.go:89] "kube-controller-manager-no-preload-690795" [d670e6eb-006f-4fce-a72f-f266fda72ccc] Running
	I0829 19:41:07.084032   78865 system_pods.go:89] "kube-proxy-p7zvh" [14f4576d-3d3e-4848-9350-3348293318aa] Running
	I0829 19:41:07.084037   78865 system_pods.go:89] "kube-scheduler-no-preload-690795" [1b4dfa7f-b043-4a38-a250-2e51eabf1b33] Running
	I0829 19:41:07.084042   78865 system_pods.go:89] "metrics-server-6867b74b74-shs88" [cd53f408-7f8a-40ae-93f3-7a00c8ae6646] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 19:41:07.084045   78865 system_pods.go:89] "storage-provisioner" [df10c563-06d8-48f8-a6e4-35837195a25d] Running
	I0829 19:41:07.084052   78865 system_pods.go:126] duration metric: took 4.784448ms to wait for k8s-apps to be running ...
	I0829 19:41:07.084062   78865 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 19:41:07.084104   78865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:07.098513   78865 system_svc.go:56] duration metric: took 14.440998ms WaitForService to wait for kubelet
	I0829 19:41:07.098551   78865 kubeadm.go:582] duration metric: took 5.351518255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 19:41:07.098574   78865 node_conditions.go:102] verifying NodePressure condition ...
	I0829 19:41:07.231160   78865 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 19:41:07.231189   78865 node_conditions.go:123] node cpu capacity is 2
	I0829 19:41:07.231200   78865 node_conditions.go:105] duration metric: took 132.62068ms to run NodePressure ...
	I0829 19:41:07.231209   78865 start.go:241] waiting for startup goroutines ...
	I0829 19:41:07.231216   78865 start.go:246] waiting for cluster config update ...
	I0829 19:41:07.231225   78865 start.go:255] writing updated cluster config ...
	I0829 19:41:07.231503   78865 ssh_runner.go:195] Run: rm -f paused
	I0829 19:41:07.283204   78865 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 19:41:07.284751   78865 out.go:177] * Done! kubectl is now configured to use "no-preload-690795" cluster and "default" namespace by default
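	The apiserver health wait recorded by api_server.go above (checking https://192.168.39.76:8443/healthz until it returns 200) follows a simple polling pattern. The sketch below is illustrative only, assuming a self-signed apiserver certificate, and is not the actual minikube code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout elapses.
    func waitForHealthz(endpoint string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The endpoint serves a self-signed certificate, so this probe skips
    		// verification; it only checks reachability and status code.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(endpoint + "/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("apiserver at %s did not become healthy within %s", endpoint, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.76:8443", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }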
	I0829 19:41:40.770350   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:41:40.770652   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:41:40.770684   79869 kubeadm.go:310] 
	I0829 19:41:40.770740   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:41:40.770802   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:41:40.770818   79869 kubeadm.go:310] 
	I0829 19:41:40.770862   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:41:40.770917   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:41:40.771047   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:41:40.771057   79869 kubeadm.go:310] 
	I0829 19:41:40.771202   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:41:40.771254   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:41:40.771309   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:41:40.771320   79869 kubeadm.go:310] 
	I0829 19:41:40.771447   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:41:40.771565   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:41:40.771576   79869 kubeadm.go:310] 
	I0829 19:41:40.771675   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:41:40.771776   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:41:40.771900   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:41:40.771997   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:41:40.772010   79869 kubeadm.go:310] 
	I0829 19:41:40.772984   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:41:40.773093   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:41:40.773213   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0829 19:41:40.773353   79869 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0829 19:41:40.773398   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0829 19:41:41.224263   79869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:41:41.239310   79869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 19:41:41.249121   79869 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 19:41:41.249142   79869 kubeadm.go:157] found existing configuration files:
	
	I0829 19:41:41.249195   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 19:41:41.258534   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 19:41:41.258591   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 19:41:41.267814   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 19:41:41.276813   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 19:41:41.276871   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 19:41:41.286937   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.296364   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 19:41:41.296435   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 19:41:41.306574   79869 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 19:41:41.315824   79869 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 19:41:41.315899   79869 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
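	The stale-config cleanup logged by kubeadm.go:163 above checks each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes any file that does not contain it, so the retried kubeadm init regenerates it. A minimal sketch of that pattern, with os/exec standing in for minikube's ssh_runner (hypothetical helper, not the real code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func cleanupStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the file is missing,
    		// which is exactly the "may not be in ... - will remove" case in the log.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    }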
	I0829 19:41:41.325290   79869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 19:41:41.389915   79869 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0829 19:41:41.390071   79869 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 19:41:41.529956   79869 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 19:41:41.530108   79869 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 19:41:41.530226   79869 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 19:41:41.709310   79869 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 19:41:41.711945   79869 out.go:235]   - Generating certificates and keys ...
	I0829 19:41:41.712051   79869 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 19:41:41.712127   79869 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 19:41:41.712225   79869 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 19:41:41.712308   79869 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 19:41:41.712402   79869 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 19:41:41.712466   79869 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 19:41:41.712551   79869 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 19:41:41.712622   79869 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 19:41:41.712727   79869 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 19:41:41.712831   79869 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 19:41:41.712865   79869 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 19:41:41.712912   79869 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 19:41:41.790778   79869 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 19:41:41.993240   79869 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 19:41:42.180389   79869 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 19:41:42.248561   79869 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 19:41:42.272297   79869 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 19:41:42.273147   79869 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 19:41:42.273249   79869 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 19:41:42.421783   79869 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 19:41:42.424669   79869 out.go:235]   - Booting up control plane ...
	I0829 19:41:42.424781   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 19:41:42.434145   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 19:41:42.437026   79869 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 19:41:42.437823   79869 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 19:41:42.441047   79869 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 19:42:22.439545   79869 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0829 19:42:22.439898   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:22.440093   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:27.439985   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:27.440226   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:37.440067   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:37.440333   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:42:57.439710   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:42:57.439891   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.439862   79869 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0829 19:43:37.440057   79869 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0829 19:43:37.440081   79869 kubeadm.go:310] 
	I0829 19:43:37.440118   79869 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0829 19:43:37.440173   79869 kubeadm.go:310] 		timed out waiting for the condition
	I0829 19:43:37.440181   79869 kubeadm.go:310] 
	I0829 19:43:37.440213   79869 kubeadm.go:310] 	This error is likely caused by:
	I0829 19:43:37.440265   79869 kubeadm.go:310] 		- The kubelet is not running
	I0829 19:43:37.440376   79869 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0829 19:43:37.440384   79869 kubeadm.go:310] 
	I0829 19:43:37.440503   79869 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0829 19:43:37.440551   79869 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0829 19:43:37.440605   79869 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0829 19:43:37.440618   79869 kubeadm.go:310] 
	I0829 19:43:37.440763   79869 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0829 19:43:37.440893   79869 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0829 19:43:37.440904   79869 kubeadm.go:310] 
	I0829 19:43:37.441013   79869 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0829 19:43:37.441146   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0829 19:43:37.441255   79869 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0829 19:43:37.441367   79869 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0829 19:43:37.441380   79869 kubeadm.go:310] 
	I0829 19:43:37.441848   79869 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 19:43:37.441958   79869 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0829 19:43:37.442039   79869 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0829 19:43:37.442126   79869 kubeadm.go:394] duration metric: took 8m1.388269811s to StartCluster
	I0829 19:43:37.442174   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0829 19:43:37.442230   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0829 19:43:37.483512   79869 cri.go:89] found id: ""
	I0829 19:43:37.483544   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.483554   79869 logs.go:278] No container was found matching "kube-apiserver"
	I0829 19:43:37.483560   79869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0829 19:43:37.483617   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0829 19:43:37.518325   79869 cri.go:89] found id: ""
	I0829 19:43:37.518353   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.518361   79869 logs.go:278] No container was found matching "etcd"
	I0829 19:43:37.518368   79869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0829 19:43:37.518426   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0829 19:43:37.554541   79869 cri.go:89] found id: ""
	I0829 19:43:37.554563   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.554574   79869 logs.go:278] No container was found matching "coredns"
	I0829 19:43:37.554582   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0829 19:43:37.554650   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0829 19:43:37.589041   79869 cri.go:89] found id: ""
	I0829 19:43:37.589069   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.589076   79869 logs.go:278] No container was found matching "kube-scheduler"
	I0829 19:43:37.589083   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0829 19:43:37.589132   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0829 19:43:37.624451   79869 cri.go:89] found id: ""
	I0829 19:43:37.624479   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.624491   79869 logs.go:278] No container was found matching "kube-proxy"
	I0829 19:43:37.624499   79869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0829 19:43:37.624554   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0829 19:43:37.660162   79869 cri.go:89] found id: ""
	I0829 19:43:37.660186   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.660193   79869 logs.go:278] No container was found matching "kube-controller-manager"
	I0829 19:43:37.660199   79869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0829 19:43:37.660249   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0829 19:43:37.696806   79869 cri.go:89] found id: ""
	I0829 19:43:37.696836   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.696844   79869 logs.go:278] No container was found matching "kindnet"
	I0829 19:43:37.696850   79869 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0829 19:43:37.696898   79869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0829 19:43:37.732828   79869 cri.go:89] found id: ""
	I0829 19:43:37.732851   79869 logs.go:276] 0 containers: []
	W0829 19:43:37.732860   79869 logs.go:278] No container was found matching "kubernetes-dashboard"
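	The post-mortem scan above (cri.go / logs.go) lists CRI containers for each control-plane component by name to see whether any of them ever started after the kubeadm timeout. A rough local equivalent is sketched below; the real code runs these crictl commands over SSH, and this is only an illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same listing command the log shows: all containers (any state) whose
    		// name matches the component, IDs only.
    		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
    	}
    }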
	I0829 19:43:37.732871   79869 logs.go:123] Gathering logs for container status ...
	I0829 19:43:37.732887   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 19:43:37.772219   79869 logs.go:123] Gathering logs for kubelet ...
	I0829 19:43:37.772247   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 19:43:37.823967   79869 logs.go:123] Gathering logs for dmesg ...
	I0829 19:43:37.824003   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 19:43:37.838884   79869 logs.go:123] Gathering logs for describe nodes ...
	I0829 19:43:37.838906   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0829 19:43:37.915184   79869 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0829 19:43:37.915206   79869 logs.go:123] Gathering logs for CRI-O ...
	I0829 19:43:37.915222   79869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0829 19:43:38.020759   79869 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0829 19:43:38.020827   79869 out.go:270] * 
	W0829 19:43:38.020882   79869 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.020897   79869 out.go:270] * 
	W0829 19:43:38.021777   79869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 19:43:38.024855   79869 out.go:201] 
	W0829 19:43:38.025860   79869 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0829 19:43:38.025905   79869 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0829 19:43:38.025936   79869 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0829 19:43:38.027175   79869 out.go:201] 
	
	
	==> CRI-O <==
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.079115016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961327079091882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=324b0bfb-4d35-4e41-bc23-d3bad369f1d3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.079560715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7255f280-82f1-4811-9fec-35e0562c815f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.079608651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7255f280-82f1-4811-9fec-35e0562c815f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.079644012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7255f280-82f1-4811-9fec-35e0562c815f name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.110115573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f89e838-3f20-4139-a639-15f4355a88d3 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.110199476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f89e838-3f20-4139-a639-15f4355a88d3 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.111614858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6db0532-3e22-4d1b-97b2-5a6b74b9dbe6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.112063769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961327112032849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6db0532-3e22-4d1b-97b2-5a6b74b9dbe6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.112592990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eac8d53-ae2c-42a0-b005-7fb04cfc64df name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.112645547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eac8d53-ae2c-42a0-b005-7fb04cfc64df name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.112685625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9eac8d53-ae2c-42a0-b005-7fb04cfc64df name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.143053902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22a8bca5-68c2-468c-a432-926122d89afe name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.143129377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22a8bca5-68c2-468c-a432-926122d89afe name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.144259602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2433d111-8a5b-4d24-9612-2724af9618f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.144739774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961327144712365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2433d111-8a5b-4d24-9612-2724af9618f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.145245324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c956c20f-36fa-4062-8268-0504a2b85cf5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.145308563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c956c20f-36fa-4062-8268-0504a2b85cf5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.145352472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c956c20f-36fa-4062-8268-0504a2b85cf5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.176135278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ade0c854-7444-4d72-a35c-96ad6f49d058 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.176227604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ade0c854-7444-4d72-a35c-96ad6f49d058 name=/runtime.v1.RuntimeService/Version
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.177389543Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0acdff5c-6fcc-4156-b681-915a9dec5826 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.177855611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724961327177827862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0acdff5c-6fcc-4156-b681-915a9dec5826 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.178337785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebec95c3-9dfd-486a-819a-7f0eb2727377 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.178407349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebec95c3-9dfd-486a-819a-7f0eb2727377 name=/runtime.v1.RuntimeService/ListContainers
	Aug 29 19:55:27 old-k8s-version-467349 crio[629]: time="2024-08-29 19:55:27.178446496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ebec95c3-9dfd-486a-819a-7f0eb2727377 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug29 19:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052596] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039104] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.984718] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595405] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.892866] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.060569] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055946] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.216571] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.121311] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.242095] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +6.546376] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.055907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.984348] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[ +14.158991] kauditd_printk_skb: 46 callbacks suppressed
	[Aug29 19:39] systemd-fstab-generator[5113]: Ignoring "noauto" option for root device
	[Aug29 19:41] systemd-fstab-generator[5395]: Ignoring "noauto" option for root device
	[  +0.067610] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:55:27 up 20 min,  0 users,  load average: 0.02, 0.05, 0.03
	Linux old-k8s-version-467349 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000cd9a70)
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: goroutine 151 [select]:
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00075fef0, 0x4f0ac20, 0xc000cb9c20, 0x1, 0xc00009e0c0)
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00023e460, 0xc00009e0c0)
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000cc8f40, 0xc000334bc0)
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 29 19:55:24 old-k8s-version-467349 kubelet[6919]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 29 19:55:24 old-k8s-version-467349 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 29 19:55:24 old-k8s-version-467349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 29 19:55:25 old-k8s-version-467349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 143.
	Aug 29 19:55:25 old-k8s-version-467349 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 29 19:55:25 old-k8s-version-467349 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 29 19:55:25 old-k8s-version-467349 kubelet[6928]: I0829 19:55:25.386397    6928 server.go:416] Version: v1.20.0
	Aug 29 19:55:25 old-k8s-version-467349 kubelet[6928]: I0829 19:55:25.386863    6928 server.go:837] Client rotation is on, will bootstrap in background
	Aug 29 19:55:25 old-k8s-version-467349 kubelet[6928]: I0829 19:55:25.388878    6928 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 29 19:55:25 old-k8s-version-467349 kubelet[6928]: I0829 19:55:25.390198    6928 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 29 19:55:25 old-k8s-version-467349 kubelet[6928]: W0829 19:55:25.390234    6928 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 2 (220.691164ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-467349" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (163.94s)
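The failure above traces back to the kubelet never coming up on the old-k8s-version node: kubeadm keeps polling http://localhost:10248/healthz, the kubelet journal shows a crash loop (restart counter at 143, "Cannot detect current cgroup on cgroup v2"), and minikube's own suggestion points at the kubelet cgroup driver. Below is a minimal shell sketch of the manual checks the log recommends, assuming shell access to the node (for example via "minikube ssh -p old-k8s-version-467349"); whether the cgroup-driver override actually fixes this environment is an assumption, not something the report confirms.

# 1. Probe the kubelet health endpoint that kubeadm was polling.
curl -sSL http://localhost:10248/healthz

# 2. Inspect the kubelet service state and its journal for the crash loop.
sudo systemctl status kubelet
sudo journalctl -xeu kubelet | tail -n 50

# 3. List control-plane containers through CRI-O, as the kubeadm output recommends.
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

# 4. From the host, retry the profile with the cgroup driver override minikube suggests.
minikube start -p old-k8s-version-467349 --extra-config=kubelet.cgroup-driver=systemd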

                                                
                                    

Test pass (245/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 15.96
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.95
22 TestOffline 79.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 132.43
31 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/parallel/InspektorGadget 12.18
37 TestAddons/parallel/HelmTiller 11.58
39 TestAddons/parallel/CSI 47.38
40 TestAddons/parallel/Headlamp 12.22
41 TestAddons/parallel/CloudSpanner 6.55
42 TestAddons/parallel/LocalPath 58.28
43 TestAddons/parallel/NvidiaDevicePlugin 6.49
44 TestAddons/parallel/Yakd 11.87
45 TestAddons/StoppedEnableDisable 7.54
46 TestCertOptions 75.61
47 TestCertExpiration 300.55
49 TestForceSystemdFlag 67.12
50 TestForceSystemdEnv 43.05
52 TestKVMDriverInstallOrUpdate 5
56 TestErrorSpam/setup 37.35
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.69
59 TestErrorSpam/pause 1.53
60 TestErrorSpam/unpause 1.72
61 TestErrorSpam/stop 5.96
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 52.81
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.56
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
73 TestFunctional/serial/CacheCmd/cache/add_local 2.1
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 32.84
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.39
84 TestFunctional/serial/LogsFileCmd 1.38
85 TestFunctional/serial/InvalidService 4.37
87 TestFunctional/parallel/ConfigCmd 0.29
88 TestFunctional/parallel/DashboardCmd 14.35
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.79
95 TestFunctional/parallel/ServiceCmdConnect 20.52
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 46.59
99 TestFunctional/parallel/SSHCmd 0.53
100 TestFunctional/parallel/CpCmd 1.36
101 TestFunctional/parallel/MySQL 22.14
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.46
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
111 TestFunctional/parallel/License 0.59
112 TestFunctional/parallel/MountCmd/any-port 18.38
113 TestFunctional/parallel/Version/short 0.04
114 TestFunctional/parallel/Version/components 0.62
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
119 TestFunctional/parallel/ImageCommands/ImageBuild 6.64
120 TestFunctional/parallel/ImageCommands/Setup 1.75
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.78
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.94
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
128 TestFunctional/parallel/ServiceCmd/DeployApp 17.16
129 TestFunctional/parallel/MountCmd/specific-port 1.92
130 TestFunctional/parallel/MountCmd/VerifyCleanup 0.67
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
132 TestFunctional/parallel/ProfileCmd/profile_list 0.28
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
137 TestFunctional/parallel/ServiceCmd/List 1.35
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.28
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
149 TestFunctional/parallel/ServiceCmd/Format 0.36
150 TestFunctional/parallel/ServiceCmd/URL 0.4
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 195.62
158 TestMultiControlPlane/serial/DeployApp 6.94
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 57.31
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
163 TestMultiControlPlane/serial/CopyFile 12.26
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.33
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 358.18
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
174 TestMultiControlPlane/serial/AddSecondaryNode 79.31
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
179 TestJSONOutput/start/Command 83.74
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.67
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.62
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 83.53
211 TestMountStart/serial/StartWithMountFirst 28.99
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 27.38
214 TestMountStart/serial/VerifyMountSecond 0.35
215 TestMountStart/serial/DeleteFirst 0.66
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.26
218 TestMountStart/serial/RestartStopped 22.62
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 111.98
223 TestMultiNode/serial/DeployApp2Nodes 6.46
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 46.7
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.99
229 TestMultiNode/serial/StopNode 2.17
230 TestMultiNode/serial/StartAfterStop 40
232 TestMultiNode/serial/DeleteNode 2.19
234 TestMultiNode/serial/RestartMultiNode 178.76
235 TestMultiNode/serial/ValidateNameConflict 43.86
242 TestScheduledStopUnix 114.38
246 TestRunningBinaryUpgrade 196.13
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 93.29
253 TestStoppedBinaryUpgrade/Setup 2.32
254 TestStoppedBinaryUpgrade/Upgrade 128.13
255 TestNoKubernetes/serial/StartWithStopK8s 38.83
256 TestNoKubernetes/serial/Start 29.5
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
258 TestNoKubernetes/serial/ProfileList 32.73
259 TestNoKubernetes/serial/Stop 2.36
260 TestNoKubernetes/serial/StartNoArgs 43.08
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
270 TestNetworkPlugins/group/false 2.91
282 TestPause/serial/Start 78.09
283 TestNetworkPlugins/group/auto/Start 114.56
285 TestNetworkPlugins/group/kindnet/Start 62.27
286 TestNetworkPlugins/group/calico/Start 102.13
287 TestNetworkPlugins/group/auto/KubeletFlags 0.19
288 TestNetworkPlugins/group/auto/NetCatPod 9.22
289 TestNetworkPlugins/group/auto/DNS 0.18
290 TestNetworkPlugins/group/auto/Localhost 0.14
291 TestNetworkPlugins/group/auto/HairPin 0.15
292 TestNetworkPlugins/group/custom-flannel/Start 87.43
293 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
295 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
296 TestNetworkPlugins/group/kindnet/DNS 0.18
297 TestNetworkPlugins/group/kindnet/Localhost 0.14
298 TestNetworkPlugins/group/kindnet/HairPin 0.14
299 TestNetworkPlugins/group/enable-default-cni/Start 58.29
300 TestNetworkPlugins/group/flannel/Start 85.24
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/KubeletFlags 0.22
303 TestNetworkPlugins/group/calico/NetCatPod 12.25
304 TestNetworkPlugins/group/calico/DNS 0.21
305 TestNetworkPlugins/group/calico/Localhost 0.17
306 TestNetworkPlugins/group/calico/HairPin 0.17
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
309 TestNetworkPlugins/group/custom-flannel/DNS 0.19
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
312 TestNetworkPlugins/group/bridge/Start 92.08
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.96
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
321 TestStartStop/group/no-preload/serial/FirstStart 91.61
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
324 TestNetworkPlugins/group/flannel/NetCatPod 10.22
325 TestNetworkPlugins/group/flannel/DNS 0.19
326 TestNetworkPlugins/group/flannel/Localhost 0.17
327 TestNetworkPlugins/group/flannel/HairPin 0.16
329 TestStartStop/group/embed-certs/serial/FirstStart 64.24
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
331 TestNetworkPlugins/group/bridge/NetCatPod 11.24
332 TestNetworkPlugins/group/bridge/DNS 0.17
333 TestNetworkPlugins/group/bridge/Localhost 0.15
334 TestNetworkPlugins/group/bridge/HairPin 0.13
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.15
337 TestStartStop/group/no-preload/serial/DeployApp 11.3
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
340 TestStartStop/group/embed-certs/serial/DeployApp 9.28
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
347 TestStartStop/group/no-preload/serial/SecondStart 672.02
350 TestStartStop/group/embed-certs/serial/SecondStart 595.07
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 543.94
354 TestStartStop/group/old-k8s-version/serial/Stop 2.28
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
366 TestStartStop/group/newest-cni/serial/FirstStart 43.58
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
369 TestStartStop/group/newest-cni/serial/Stop 7.33
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/newest-cni/serial/SecondStart 35.76
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/newest-cni/serial/Pause 2.28
x
+
TestDownloadOnly/v1.20.0/json-events (28.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-366415 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-366415 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.670297922s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-366415
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-366415: exit status 85 (54.726086ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |          |
	|         | -p download-only-366415        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:27.352643   20271 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:27.352910   20271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.352921   20271 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:27.352928   20271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:27.353115   20271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	W0829 18:05:27.353258   20271 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19531-13056/.minikube/config/config.json: open /home/jenkins/minikube-integration/19531-13056/.minikube/config/config.json: no such file or directory
	I0829 18:05:27.353840   20271 out.go:352] Setting JSON to true
	I0829 18:05:27.354776   20271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2874,"bootTime":1724951853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:27.354843   20271 start.go:139] virtualization: kvm guest
	I0829 18:05:27.356912   20271 out.go:97] [download-only-366415] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:05:27.357014   20271 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:27.357053   20271 notify.go:220] Checking for updates...
	I0829 18:05:27.358116   20271 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:27.359226   20271 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:27.360527   20271 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:05:27.361636   20271 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:05:27.362875   20271 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:27.365045   20271 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:27.365240   20271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:27.460164   20271 out.go:97] Using the kvm2 driver based on user configuration
	I0829 18:05:27.460200   20271 start.go:297] selected driver: kvm2
	I0829 18:05:27.460207   20271 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:05:27.460566   20271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:27.460684   20271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:05:27.475659   20271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:05:27.475714   20271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:27.476167   20271 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0829 18:05:27.476334   20271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:27.476404   20271 cni.go:84] Creating CNI manager for ""
	I0829 18:05:27.476420   20271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:05:27.476433   20271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:27.476500   20271 start.go:340] cluster config:
	{Name:download-only-366415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-366415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:27.476689   20271 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:27.478673   20271 out.go:97] Downloading VM boot image ...
	I0829 18:05:27.478715   20271 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0829 18:05:41.356621   20271 out.go:97] Starting "download-only-366415" primary control-plane node in "download-only-366415" cluster
	I0829 18:05:41.356650   20271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 18:05:41.455645   20271 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:05:41.455687   20271 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:41.455847   20271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0829 18:05:41.457555   20271 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:05:41.457577   20271 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0829 18:05:41.565720   20271 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-366415 host does not exist
	  To start a cluster, run: "minikube start -p download-only-366415"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
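The "Last Start" log above shows download.go fetching the v1.20.0 preload tarball with a checksum hint appended to the URL (?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19), and the v1.31.0 run further down logs a "verifying checksum" step on the cached file. A rough, hand-rolled equivalent of that check is sketched below, assuming only curl and md5sum on the host; minikube's internal verification may differ in detail.

# Download the preload tarball seen in the log and compare its md5 against the
# checksum that minikube appends to the download URL.
URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
EXPECTED_MD5="f93b07cde9c3289306cbaeb7a1803c19"

curl -fsSL -o preload.tar.lz4 "$URL"
ACTUAL_MD5="$(md5sum preload.tar.lz4 | awk '{print $1}')"

if [ "$ACTUAL_MD5" = "$EXPECTED_MD5" ]; then
    echo "preload checksum OK"
else
    echo "preload checksum mismatch: got $ACTUAL_MD5, want $EXPECTED_MD5" >&2
    exit 1
fi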

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-366415
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (15.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-105926 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-105926 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.955799167s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (15.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-105926
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-105926: exit status 85 (56.33319ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-366415        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-366415        | download-only-366415 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only        | download-only-105926 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-105926        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:56.328499   20541 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:56.328612   20541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:56.328621   20541 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:56.328624   20541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:56.328846   20541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:05:56.329440   20541 out.go:352] Setting JSON to true
	I0829 18:05:56.330291   20541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2903,"bootTime":1724951853,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:56.330372   20541 start.go:139] virtualization: kvm guest
	I0829 18:05:56.332258   20541 out.go:97] [download-only-105926] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:56.332416   20541 notify.go:220] Checking for updates...
	I0829 18:05:56.333591   20541 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:56.334746   20541 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:56.336063   20541 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:05:56.337068   20541 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:05:56.338067   20541 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:56.340137   20541 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:56.340359   20541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:56.371929   20541 out.go:97] Using the kvm2 driver based on user configuration
	I0829 18:05:56.371964   20541 start.go:297] selected driver: kvm2
	I0829 18:05:56.371972   20541 start.go:901] validating driver "kvm2" against <nil>
	I0829 18:05:56.372271   20541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:56.372354   20541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19531-13056/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0829 18:05:56.387370   20541 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0829 18:05:56.387420   20541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:56.387889   20541 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0829 18:05:56.388019   20541 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:56.388050   20541 cni.go:84] Creating CNI manager for ""
	I0829 18:05:56.388057   20541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0829 18:05:56.388069   20541 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:05:56.388110   20541 start.go:340] cluster config:
	{Name:download-only-105926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-105926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:56.388198   20541 iso.go:125] acquiring lock: {Name:mk2cdea98b92f02678fa274ca90f4ac4deebad66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:05:56.389721   20541 out.go:97] Starting "download-only-105926" primary control-plane node in "download-only-105926" cluster
	I0829 18:05:56.389743   20541 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:56.529953   20541 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:05:56.529989   20541 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:56.530145   20541 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0829 18:05:56.531636   20541 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0829 18:05:56.531648   20541 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0829 18:05:56.639227   20541 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0829 18:06:10.679379   20541 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0829 18:06:10.679468   20541 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19531-13056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-105926 host does not exist
	  To start a cluster, run: "minikube start -p download-only-105926"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)
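Note: the preload step logged above downloads the tarball with an md5 checksum in the URL query, then saves and verifies that checksum before trusting the cache. A minimal sketch of that verify-after-download pattern; the file name below is a placeholder, and the digest is the one shown in the download URL:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected hex digest.
	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}

	func main() {
		// Placeholder file name; the digest matches the checksum in the download URL above.
		if err := verifyMD5("preloaded-images.tar.lz4", "4a2ae163f7665ceaa95dee8ffc8efdba"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}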

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-105926
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.95s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-728877 --alsologtostderr --binary-mirror http://127.0.0.1:38491 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-728877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-728877
--- PASS: TestBinaryMirror (0.95s)
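Note: TestBinaryMirror points minikube at http://127.0.0.1:38491 via --binary-mirror, which presumably serves pre-staged kubectl/kubeadm/kubelet binaries from the test host. A sketch of the kind of throwaway file server that could back such a mirror; the ./mirror directory is illustrative, not the test's actual staging path:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of pre-downloaded binaries over HTTP so that
		// "minikube start --binary-mirror http://127.0.0.1:38491" fetches
		// them locally instead of from the public mirrors.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:38491", fs))
	}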

                                                
                                    
x
+
TestOffline (79.79s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-054827 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-054827 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.779693174s)
helpers_test.go:175: Cleaning up "offline-crio-054827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-054827
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-054827: (1.011580292s)
--- PASS: TestOffline (79.79s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-647117
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-647117: exit status 85 (207.586166ms)

                                                
                                                
-- stdout --
	* Profile "addons-647117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-647117"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)
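Note: the "Non-zero exit ... exit status 85" lines come from the harness running the CLI and recording its exit code. A small sketch of capturing that exit code with os/exec, mirroring the command above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-647117")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// For a profile that does not exist yet, the log above shows exit status 85.
			fmt.Println("exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("command failed to start:", err)
		}
	}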

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-647117
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-647117: exit status 85 (206.967372ms)

                                                
                                                
-- stdout --
	* Profile "addons-647117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-647117"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
x
+
TestAddons/Setup (132.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-647117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-647117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m12.429905378s)
--- PASS: TestAddons/Setup (132.43s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-647117 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-647117 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.18s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n82kn" [40e746b1-473d-47aa-96bb-9c8d8bec2439] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004125725s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-647117
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-647117: (6.174466117s)
--- PASS: TestAddons/parallel/InspektorGadget (12.18s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.58s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.37196ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-bz7cs" [29de8757-9c38-4526-a266-586cd80d8d3b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003194261s
addons_test.go:475: (dbg) Run:  kubectl --context addons-647117 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-647117 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.038109056s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.58s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.38s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.162983ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-647117 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-647117 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8be0a182-00db-4bba-a04c-cef5cd9d26dc] Pending
helpers_test.go:344: "task-pv-pod" [8be0a182-00db-4bba-a04c-cef5cd9d26dc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8be0a182-00db-4bba-a04c-cef5cd9d26dc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005901291s
addons_test.go:590: (dbg) Run:  kubectl --context addons-647117 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-647117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-647117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-647117 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-647117 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-647117 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-647117 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [56d38e12-dbe4-496e-83c5-8cd1d7bd1c84] Pending
helpers_test.go:344: "task-pv-pod-restore" [56d38e12-dbe4-496e-83c5-8cd1d7bd1c84] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [56d38e12-dbe4-496e-83c5-8cd1d7bd1c84] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004236767s
addons_test.go:632: (dbg) Run:  kubectl --context addons-647117 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-647117 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-647117 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.706933166s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.38s)
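Note: the repeated helpers_test.go:394 lines are a poll loop; the same jsonpath query is re-run until the claim reports Bound. A rough stand-alone equivalent of that loop (context, claim, and namespace copied from the log; interval and timeout are arbitrary choices):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls `kubectl get pvc` until the claim's phase is Bound.
	func waitForPVCBound(context, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-647117", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("hpvc is Bound")
	}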

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-647117 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-jmjhc" [9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-jmjhc" [9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-jmjhc" [9ddb72c4-9529-4033-bdfc-cf38dbdb6a4b] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004607432s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.22s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-s9pkv" [b77b7249-4dc6-450f-a5e6-f843096a954d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003584642s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-647117
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.28s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-647117 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-647117 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b9fa9d03-fb2e-451e-a9a6-0b22782a8629] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b9fa9d03-fb2e-451e-a9a6-0b22782a8629] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b9fa9d03-fb2e-451e-a9a6-0b22782a8629] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004681281s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-647117 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 ssh "cat /opt/local-path-provisioner/pvc-802ad026-bf20-44ed-8a63-3b8e6e455a85_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-647117 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-647117 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.496232284s)
--- PASS: TestAddons/parallel/LocalPath (58.28s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dlhxf" [ed192022-4f02-4de0-98b0-3c54ba3a49e6] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004899773s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-647117
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lsprz" [8a0668ba-3507-40e5-bd22-584a4f90cb67] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003780233s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-647117 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-647117 addons disable yakd --alsologtostderr -v=1: (5.870055934s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (7.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-647117
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-647117: (7.28053159s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-647117
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-647117
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-647117
--- PASS: TestAddons/StoppedEnableDisable (7.54s)

                                                
                                    
x
+
TestCertOptions (75.61s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-034564 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-034564 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.394698888s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-034564 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-034564 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-034564 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-034564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-034564
--- PASS: TestCertOptions (75.61s)
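Note: TestCertOptions checks that the extra --apiserver-ips/--apiserver-names values show up as SANs in /var/lib/minikube/certs/apiserver.crt, using openssl above. The same inspection can be done by parsing the PEM with crypto/x509 (the path is taken from the test's openssl invocation; run this where the cert is readable):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The test passes localhost / www.google.com and 127.0.0.1 / 192.168.15.15 above.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs :", cert.IPAddresses)
		fmt.Println("NotAfter:", cert.NotAfter)
	}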

                                                
                                    
x
+
TestCertExpiration (300.55s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-492436 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0829 19:19:32.701021   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-492436 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m8.180581277s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-492436 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0829 19:23:26.706697   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-492436 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (51.199852847s)
helpers_test.go:175: Cleaning up "cert-expiration-492436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-492436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-492436: (1.165012471s)
--- PASS: TestCertExpiration (300.55s)

                                                
                                    
x
+
TestForceSystemdFlag (67.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-523972 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-523972 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.947530904s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-523972 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-523972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-523972
--- PASS: TestForceSystemdFlag (67.12s)
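Note: TestForceSystemdFlag cats /etc/crio/crio.conf.d/02-crio.conf to confirm the runtime was switched to the systemd cgroup manager. A small sketch of that text check, assuming the setting appears as a cgroup_manager = "systemd" line (the exact key name is an assumption here, not quoted from the log):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path matches the file the test cats over ssh; the expected line is an assumption.
		f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "cgroup_manager") && strings.Contains(line, "systemd") {
				fmt.Println("systemd cgroup manager is configured:", line)
				return
			}
		}
		fmt.Println("systemd cgroup manager setting not found")
	}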

                                                
                                    
x
+
TestForceSystemdEnv (43.05s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-935984 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-935984 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.837917785s)
helpers_test.go:175: Cleaning up "force-systemd-env-935984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-935984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-935984: (1.208549034s)
--- PASS: TestForceSystemdEnv (43.05s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0829 19:19:49.633478   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestKVMDriverInstallOrUpdate (5.00s)

                                                
                                    
x
+
TestErrorSpam/setup (37.35s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-508032 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-508032 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-508032 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-508032 --driver=kvm2  --container-runtime=crio: (37.351240519s)
--- PASS: TestErrorSpam/setup (37.35s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
x
+
TestErrorSpam/stop (5.96s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 stop: (2.280937554s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 stop: (1.93835227s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-508032 --log_dir /tmp/nospam-508032 stop: (1.74169467s)
--- PASS: TestErrorSpam/stop (5.96s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19531-13056/.minikube/files/etc/test/nested/copy/20259/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.81s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024872 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-024872 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.810771178s)
--- PASS: TestFunctional/serial/StartWithProxy (52.81s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024872 --alsologtostderr -v=8
E0829 18:23:26.706137   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:26.712839   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:26.724186   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:26.745641   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:26.787019   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:26.868518   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:27.029999   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:27.351747   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:27.994068   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:29.275428   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:31.836918   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:36.958902   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:23:47.200795   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-024872 --alsologtostderr -v=8: (39.560953197s)
functional_test.go:663: soft start took 39.561632189s for "functional-024872" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.56s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-024872 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 cache add registry.k8s.io/pause:3.1: (1.36525198s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 cache add registry.k8s.io/pause:3.3: (1.346507945s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 cache add registry.k8s.io/pause:latest: (1.240393646s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-024872 /tmp/TestFunctionalserialCacheCmdcacheadd_local3147066977/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cache add minikube-local-cache-test:functional-024872
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 cache add minikube-local-cache-test:functional-024872: (1.776416497s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cache delete minikube-local-cache-test:functional-024872
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-024872
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh sudo crictl rmi registry.k8s.io/pause:latest
E0829 18:24:07.683060   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (202.385271ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 cache reload: (1.032206012s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 kubectl -- --context functional-024872 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-024872 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.84s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024872 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-024872 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.844083593s)
functional_test.go:761: restart took 32.844195641s for "functional-024872" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.84s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-024872 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
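Note: ComponentHealth fetches the control-plane pods as JSON and summarizes each pod's phase and Ready condition, which is what the phase/status lines above report. A cut-down version of that check, decoding kubectl's JSON output into a minimal struct (field names follow the standard pod schema):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-024872",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}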

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 logs: (1.394450136s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 logs --file /tmp/TestFunctionalserialLogsFileCmd3443846542/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 logs --file /tmp/TestFunctionalserialLogsFileCmd3443846542/001/logs.txt: (1.378409281s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-024872 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-024872
E0829 18:24:48.644794   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-024872: exit status 115 (265.744263ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.12:31465 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-024872 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)
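
For reference, a minimal shell sketch of the scenario this test drives, built from the commands recorded above (the manifest path, profile name, and exit status 115 are simply what this run shows, not a documented contract):

    # create a Service whose selector matches no running pod
    kubectl --context functional-024872 apply -f testdata/invalidsvc.yaml
    # minikube refuses to open it and exits with SVC_UNREACHABLE (status 115 in this run)
    out/minikube-linux-amd64 service invalid-svc -p functional-024872 || echo "exit=$?"
    # clean up
    kubectl --context functional-024872 delete -f testdata/invalidsvc.yaml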

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 config get cpus: exit status 14 (44.852955ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 config get cpus: exit status 14 (42.197766ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
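
The config round trip above can be reproduced by hand; a sketch using the same binary and profile (exit status 14 for a missing key is what this run produced):

    # reading an unset key fails with exit status 14
    out/minikube-linux-amd64 -p functional-024872 config get cpus || echo "exit=$?"
    # set the key, read it back, then unset it again
    out/minikube-linux-amd64 -p functional-024872 config set cpus 2
    out/minikube-linux-amd64 -p functional-024872 config get cpus
    out/minikube-linux-amd64 -p functional-024872 config unset cpus
    out/minikube-linux-amd64 -p functional-024872 config get cpus || echo "exit=$?"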

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-024872 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-024872 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29585: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024872 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-024872 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.181821ms)

                                                
                                                
-- stdout --
	* [functional-024872] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:24:50.405958   29328 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:24:50.406290   29328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:50.406317   29328 out.go:358] Setting ErrFile to fd 2...
	I0829 18:24:50.406325   29328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:50.406628   29328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:24:50.407266   29328 out.go:352] Setting JSON to false
	I0829 18:24:50.408435   29328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4037,"bootTime":1724951853,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:24:50.408514   29328 start.go:139] virtualization: kvm guest
	I0829 18:24:50.410546   29328 out.go:177] * [functional-024872] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:24:50.411914   29328 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:24:50.411927   29328 notify.go:220] Checking for updates...
	I0829 18:24:50.413522   29328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:24:50.414913   29328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:24:50.416178   29328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:24:50.417602   29328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:24:50.418938   29328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:24:50.420461   29328 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:24:50.420862   29328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:24:50.420928   29328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:24:50.439061   29328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0829 18:24:50.439475   29328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:24:50.440001   29328 main.go:141] libmachine: Using API Version  1
	I0829 18:24:50.440019   29328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:24:50.440378   29328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:24:50.440558   29328 main.go:141] libmachine: (functional-024872) Calling .DriverName
	I0829 18:24:50.440801   29328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:24:50.441240   29328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:24:50.441277   29328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:24:50.456105   29328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0829 18:24:50.456516   29328 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:24:50.457029   29328 main.go:141] libmachine: Using API Version  1
	I0829 18:24:50.457051   29328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:24:50.457352   29328 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:24:50.457591   29328 main.go:141] libmachine: (functional-024872) Calling .DriverName
	I0829 18:24:50.495080   29328 out.go:177] * Using the kvm2 driver based on existing profile
	I0829 18:24:50.495984   29328 start.go:297] selected driver: kvm2
	I0829 18:24:50.495997   29328 start.go:901] validating driver "kvm2" against &{Name:functional-024872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-024872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:24:50.496095   29328 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:24:50.497988   29328 out.go:201] 
	W0829 18:24:50.499274   29328 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:24:50.500467   29328 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024872 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
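
A sketch of the validation the dry run exercises: asking for less memory than the usable minimum (1800MB per the error above) aborts with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work starts. The profile, driver, and runtime flags are copied from the log; exit status 23 is what this run produced:

    # 250MB is below the minimum, so the dry run fails fast
    out/minikube-linux-amd64 start -p functional-024872 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio || echo "exit=$?"
    # without the undersized memory request the same dry run succeeds
    out/minikube-linux-amd64 start -p functional-024872 --dry-run \
      --driver=kvm2 --container-runtime=crio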

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-024872 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-024872 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.090501ms)

                                                
                                                
-- stdout --
	* [functional-024872] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:24:50.265475   29272 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:24:50.265646   29272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:50.265666   29272 out.go:358] Setting ErrFile to fd 2...
	I0829 18:24:50.265686   29272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:24:50.265993   29272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:24:50.266524   29272 out.go:352] Setting JSON to false
	I0829 18:24:50.267405   29272 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4037,"bootTime":1724951853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:24:50.267495   29272 start.go:139] virtualization: kvm guest
	I0829 18:24:50.269795   29272 out.go:177] * [functional-024872] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0829 18:24:50.271158   29272 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:24:50.271243   29272 notify.go:220] Checking for updates...
	I0829 18:24:50.273541   29272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:24:50.274798   29272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 18:24:50.275871   29272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 18:24:50.277129   29272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:24:50.278391   29272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:24:50.279991   29272 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:24:50.280577   29272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:24:50.280632   29272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:24:50.296017   29272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I0829 18:24:50.296602   29272 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:24:50.297213   29272 main.go:141] libmachine: Using API Version  1
	I0829 18:24:50.297237   29272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:24:50.297567   29272 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:24:50.297735   29272 main.go:141] libmachine: (functional-024872) Calling .DriverName
	I0829 18:24:50.297972   29272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:24:50.298302   29272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:24:50.298376   29272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:24:50.313520   29272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I0829 18:24:50.313945   29272 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:24:50.314450   29272 main.go:141] libmachine: Using API Version  1
	I0829 18:24:50.314472   29272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:24:50.314793   29272 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:24:50.315188   29272 main.go:141] libmachine: (functional-024872) Calling .DriverName
	I0829 18:24:50.349632   29272 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0829 18:24:50.351050   29272 start.go:297] selected driver: kvm2
	I0829 18:24:50.351067   29272 start.go:901] validating driver "kvm2" against &{Name:functional-024872 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-024872 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:24:50.351217   29272 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:24:50.353399   29272 out.go:201] 
	W0829 18:24:50.354659   29272 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 18:24:50.356004   29272 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (20.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-024872 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-024872 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kkdfn" [680c1866-413d-4661-a037-f1fc1c5afd40] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
2024/08/29 18:25:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kkdfn" [680c1866-413d-4661-a037-f1fc1c5afd40] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.004747396s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.12:31526
functional_test.go:1675: http://192.168.39.12:31526: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-kkdfn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.12:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.12:31526
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.52s)
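
The connectivity check above reduces to a short sequence; a sketch with the image and names taken from the log (curl stands in for the test's HTTP GET, and the NodePort URL will differ between runs):

    kubectl --context functional-024872 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-024872 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    # once the pod is Running, ask minikube for the reachable URL and probe it
    URL=$(out/minikube-linux-amd64 -p functional-024872 service hello-node-connect --url)
    curl -s "$URL"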

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e75ad529-4128-463f-a12e-801883c95414] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004573615s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-024872 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-024872 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-024872 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-024872 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-024872 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [78b0d764-f560-45e9-a997-cb38142f2811] Pending
helpers_test.go:344: "sp-pod" [78b0d764-f560-45e9-a997-cb38142f2811] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [78b0d764-f560-45e9-a997-cb38142f2811] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003574911s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-024872 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-024872 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-024872 delete -f testdata/storage-provisioner/pod.yaml: (3.320841129s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-024872 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [697d8e41-82b9-4788-a306-3840730b7271] Pending
helpers_test.go:344: "sp-pod" [697d8e41-82b9-4788-a306-3840730b7271] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [697d8e41-82b9-4788-a306-3840730b7271] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004359736s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-024872 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.59s)
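
The persistence check follows a create/write/recreate/read pattern; a sketch using the testdata manifests named in the log (pvc.yaml defines the myclaim PVC, pod.yaml runs sp-pod with the claim mounted at /tmp/mount):

    kubectl --context functional-024872 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-024872 apply -f testdata/storage-provisioner/pod.yaml
    # write through the first pod, then delete it
    kubectl --context functional-024872 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-024872 delete -f testdata/storage-provisioner/pod.yaml
    # a new pod bound to the same PVC still sees the file
    kubectl --context functional-024872 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-024872 exec sp-pod -- ls /tmp/mount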

                                                
                                    
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh -n functional-024872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cp functional-024872:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2052748064/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh -n functional-024872 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh -n functional-024872 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.36s)
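
A sketch of the copy round trip shown above (paths are the ones the test used; a sudo cat over ssh is how it verifies the file landed; the /tmp destination below is illustrative):

    # host -> node
    out/minikube-linux-amd64 -p functional-024872 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-024872 ssh -n functional-024872 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-amd64 -p functional-024872 cp functional-024872:/home/docker/cp-test.txt /tmp/cp-test.txt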

                                                
                                    
TestFunctional/parallel/MySQL (22.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-024872 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-rldbd" [c8799bed-a82d-4499-9492-bf4672a19ca4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-rldbd" [c8799bed-a82d-4499-9492-bf4672a19ca4] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.136614796s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-024872 exec mysql-6cdb49bbb-rldbd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-024872 exec mysql-6cdb49bbb-rldbd -- mysql -ppassword -e "show databases;": exit status 1 (119.43076ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-024872 exec mysql-6cdb49bbb-rldbd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.14s)
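
The MySQL check is a deploy-and-query loop; a sketch with the names from the log. The first query can fail with ERROR 2002 while mysqld is still starting inside the container, which is why the test simply retries. The jsonpath lookup below is an assumed convenience, not part of the test:

    kubectl --context functional-024872 replace --force -f testdata/mysql.yaml
    # wait for the app=mysql pod to be Running, then query it (retry on ERROR 2002)
    POD=$(kubectl --context functional-024872 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    kubectl --context functional-024872 exec "$POD" -- mysql -ppassword -e "show databases;"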

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20259/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /etc/test/nested/copy/20259/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20259.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /etc/ssl/certs/20259.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20259.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /usr/share/ca-certificates/20259.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/202592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /etc/ssl/certs/202592.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/202592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /usr/share/ca-certificates/202592.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-024872 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
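
The label listing is a plain go-template query; the same command, copied from the log, prints every label key on the first node:

    kubectl --context functional-024872 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'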

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh "sudo systemctl is-active docker": exit status 1 (209.767023ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh "sudo systemctl is-active containerd": exit status 1 (198.684552ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
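
A sketch of the check: with crio as the selected runtime, the other runtimes report inactive inside the VM, and systemctl's non-zero status propagates back through ssh (status 3 in this run):

    out/minikube-linux-amd64 -p functional-024872 ssh "sudo systemctl is-active docker"      # prints "inactive", exits non-zero
    out/minikube-linux-amd64 -p functional-024872 ssh "sudo systemctl is-active containerd"  # likewise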

                                                
                                    
TestFunctional/parallel/License (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdany-port3931822684/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724955889630914739" to /tmp/TestFunctionalparallelMountCmdany-port3931822684/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724955889630914739" to /tmp/TestFunctionalparallelMountCmdany-port3931822684/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724955889630914739" to /tmp/TestFunctionalparallelMountCmdany-port3931822684/001/test-1724955889630914739
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.881045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 18:24 test-1724955889630914739
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh cat /mount-9p/test-1724955889630914739
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-024872 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ae31a552-483f-4ad3-8810-63e4f83b90ee] Pending
helpers_test.go:344: "busybox-mount" [ae31a552-483f-4ad3-8810-63e4f83b90ee] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ae31a552-483f-4ad3-8810-63e4f83b90ee] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ae31a552-483f-4ad3-8810-63e4f83b90ee] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.003727899s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-024872 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdany-port3931822684/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.38s)
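
The 9p mount flow can be reduced to a few commands; a sketch using a throwaway host directory (the /tmp path is illustrative; the guest mount point /mount-9p matches the log). As above, an early findmnt may fail if it races the mount becoming ready:

    HOSTDIR=/tmp/minikube-mount-demo && mkdir -p "$HOSTDIR" && echo hello > "$HOSTDIR/created-by-test"
    # run the mount in the background, then verify it from inside the VM
    out/minikube-linux-amd64 mount -p functional-024872 "$HOSTDIR:/mount-9p" &
    out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-024872 ssh "ls -la /mount-9p"
    # tear down: unmount in the guest and stop the background mount process
    out/minikube-linux-amd64 -p functional-024872 ssh "sudo umount -f /mount-9p"
    kill $!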

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024872 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-024872
localhost/kicbase/echo-server:functional-024872
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024872 image ls --format short --alsologtostderr:
I0829 18:25:23.825239   31344 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:23.825357   31344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:23.825364   31344 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:23.825369   31344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:23.825558   31344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
I0829 18:25:23.826078   31344 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:23.826190   31344 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:23.826551   31344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:23.826592   31344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:23.841566   31344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
I0829 18:25:23.842128   31344 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:23.842828   31344 main.go:141] libmachine: Using API Version  1
I0829 18:25:23.842856   31344 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:23.843185   31344 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:23.843344   31344 main.go:141] libmachine: (functional-024872) Calling .GetState
I0829 18:25:23.845358   31344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:23.845400   31344 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:23.859961   31344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
I0829 18:25:23.860401   31344 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:23.860899   31344 main.go:141] libmachine: Using API Version  1
I0829 18:25:23.860918   31344 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:23.861203   31344 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:23.861370   31344 main.go:141] libmachine: (functional-024872) Calling .DriverName
I0829 18:25:23.861546   31344 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:23.861573   31344 main.go:141] libmachine: (functional-024872) Calling .GetSSHHostname
I0829 18:25:23.864501   31344 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:23.864884   31344 main.go:141] libmachine: (functional-024872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:21", ip: ""} in network mk-functional-024872: {Iface:virbr1 ExpiryTime:2024-08-29 19:22:42 +0000 UTC Type:0 Mac:52:54:00:db:80:21 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-024872 Clientid:01:52:54:00:db:80:21}
I0829 18:25:23.864912   31344 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined IP address 192.168.39.12 and MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:23.865055   31344 main.go:141] libmachine: (functional-024872) Calling .GetSSHPort
I0829 18:25:23.865250   31344 main.go:141] libmachine: (functional-024872) Calling .GetSSHKeyPath
I0829 18:25:23.865416   31344 main.go:141] libmachine: (functional-024872) Calling .GetSSHUsername
I0829 18:25:23.865569   31344 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/functional-024872/id_rsa Username:docker}
I0829 18:25:23.997619   31344 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 18:25:24.078596   31344 main.go:141] libmachine: Making call to close driver server
I0829 18:25:24.078619   31344 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:24.078929   31344 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:24.078958   31344 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:25:24.078983   31344 main.go:141] libmachine: Making call to close driver server
I0829 18:25:24.078989   31344 main.go:141] libmachine: (functional-024872) DBG | Closing plugin on server side
I0829 18:25:24.078993   31344 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:24.079253   31344 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:24.079271   31344 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024872 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-024872  | 92b8a70334915 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-024872  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024872 image ls --format table --alsologtostderr:
I0829 18:25:26.401844   31597 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:26.402126   31597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:26.402137   31597 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:26.402143   31597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:26.402336   31597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
I0829 18:25:26.402903   31597 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:26.403013   31597 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:26.403380   31597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:26.403434   31597 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:26.417859   31597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
I0829 18:25:26.418297   31597 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:26.418818   31597 main.go:141] libmachine: Using API Version  1
I0829 18:25:26.418837   31597 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:26.419176   31597 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:26.419350   31597 main.go:141] libmachine: (functional-024872) Calling .GetState
I0829 18:25:26.421210   31597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:26.421254   31597 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:26.435748   31597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
I0829 18:25:26.436191   31597 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:26.436688   31597 main.go:141] libmachine: Using API Version  1
I0829 18:25:26.436715   31597 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:26.437057   31597 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:26.437261   31597 main.go:141] libmachine: (functional-024872) Calling .DriverName
I0829 18:25:26.437466   31597 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:26.437485   31597 main.go:141] libmachine: (functional-024872) Calling .GetSSHHostname
I0829 18:25:26.440114   31597 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:26.440526   31597 main.go:141] libmachine: (functional-024872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:21", ip: ""} in network mk-functional-024872: {Iface:virbr1 ExpiryTime:2024-08-29 19:22:42 +0000 UTC Type:0 Mac:52:54:00:db:80:21 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-024872 Clientid:01:52:54:00:db:80:21}
I0829 18:25:26.440560   31597 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined IP address 192.168.39.12 and MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:26.440710   31597 main.go:141] libmachine: (functional-024872) Calling .GetSSHPort
I0829 18:25:26.440883   31597 main.go:141] libmachine: (functional-024872) Calling .GetSSHKeyPath
I0829 18:25:26.441024   31597 main.go:141] libmachine: (functional-024872) Calling .GetSSHUsername
I0829 18:25:26.441167   31597 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/functional-024872/id_rsa Username:docker}
I0829 18:25:26.561894   31597 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 18:25:26.620325   31597 main.go:141] libmachine: Making call to close driver server
I0829 18:25:26.620348   31597 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:26.620609   31597 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:26.620641   31597 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:25:26.620648   31597 main.go:141] libmachine: (functional-024872) DBG | Closing plugin on server side
I0829 18:25:26.620650   31597 main.go:141] libmachine: Making call to close driver server
I0829 18:25:26.620659   31597 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:26.620889   31597 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:26.620901   31597 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024872 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"92b8a70334915941c47c0463987bb9953392d211f96f29c5729db6e4a6dd6978","repoDigests":["localhost/minikube-local-cache-test@sha256:9ff118510940d8e9905e750186a4bf8c19d9a4f5e8babd2225cfbd83b57578f1"],"repoTags":["localhost/minikube-local-cache-test:functional-024872"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/
coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"1766f54c897f0e57040741
e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","re
gistry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c641
3dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-024872"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda2
09e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/
pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024872 image ls --format json --alsologtostderr:
I0829 18:25:26.127651   31573 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:26.127780   31573 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:26.127791   31573 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:26.127797   31573 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:26.128079   31573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
I0829 18:25:26.128871   31573 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:26.129031   31573 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:26.129644   31573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:26.129700   31573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:26.144276   31573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
I0829 18:25:26.144773   31573 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:26.145341   31573 main.go:141] libmachine: Using API Version  1
I0829 18:25:26.145360   31573 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:26.145684   31573 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:26.145865   31573 main.go:141] libmachine: (functional-024872) Calling .GetState
I0829 18:25:26.147997   31573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:26.148040   31573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:26.162306   31573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
I0829 18:25:26.162715   31573 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:26.163197   31573 main.go:141] libmachine: Using API Version  1
I0829 18:25:26.163224   31573 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:26.163527   31573 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:26.163725   31573 main.go:141] libmachine: (functional-024872) Calling .DriverName
I0829 18:25:26.163930   31573 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:26.163964   31573 main.go:141] libmachine: (functional-024872) Calling .GetSSHHostname
I0829 18:25:26.166552   31573 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:26.166972   31573 main.go:141] libmachine: (functional-024872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:21", ip: ""} in network mk-functional-024872: {Iface:virbr1 ExpiryTime:2024-08-29 19:22:42 +0000 UTC Type:0 Mac:52:54:00:db:80:21 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-024872 Clientid:01:52:54:00:db:80:21}
I0829 18:25:26.167007   31573 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined IP address 192.168.39.12 and MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:26.167094   31573 main.go:141] libmachine: (functional-024872) Calling .GetSSHPort
I0829 18:25:26.167330   31573 main.go:141] libmachine: (functional-024872) Calling .GetSSHKeyPath
I0829 18:25:26.167487   31573 main.go:141] libmachine: (functional-024872) Calling .GetSSHUsername
I0829 18:25:26.167613   31573 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/functional-024872/id_rsa Username:docker}
I0829 18:25:26.285062   31573 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 18:25:26.356635   31573 main.go:141] libmachine: Making call to close driver server
I0829 18:25:26.356651   31573 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:26.356926   31573 main.go:141] libmachine: (functional-024872) DBG | Closing plugin on server side
I0829 18:25:26.356981   31573 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:26.356991   31573 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:25:26.357004   31573 main.go:141] libmachine: Making call to close driver server
I0829 18:25:26.357012   31573 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:26.357303   31573 main.go:141] libmachine: (functional-024872) DBG | Closing plugin on server side
I0829 18:25:26.357395   31573 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:26.357434   31573 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
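
Note: the JSON listing above is what "minikube image ls --format json" relays from "sudo crictl images --output json" on the node. As a minimal sketch (not minikube's own code), the Go snippet below decodes entries of the shape shown in that stdout (id, repoDigests, repoTags, size); the binary path and profile name are the ones from this run and should be adjusted for another environment.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageEntry mirrors the objects printed by "image ls --format json" above.
    type imageEntry struct {
    	ID          string   `json:"id"`
    	RepoDigests []string `json:"repoDigests"`
    	RepoTags    []string `json:"repoTags"`
    	Size        string   `json:"size"` // size in bytes, as a string
    }

    func main() {
    	// Profile name taken from this test run; adjust for your own cluster.
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-024872",
    		"image", "ls", "--format", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var images []imageEntry
    	if err := json.Unmarshal(out, &images); err != nil {
    		panic(err)
    	}
    	for _, img := range images {
    		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
    	}
    }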

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024872 image ls --format yaml --alsologtostderr:
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-024872
size: "4943877"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 92b8a70334915941c47c0463987bb9953392d211f96f29c5729db6e4a6dd6978
repoDigests:
- localhost/minikube-local-cache-test@sha256:9ff118510940d8e9905e750186a4bf8c19d9a4f5e8babd2225cfbd83b57578f1
repoTags:
- localhost/minikube-local-cache-test:functional-024872
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024872 image ls --format yaml --alsologtostderr:
I0829 18:25:24.129571   31374 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:24.129769   31374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:24.129781   31374 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:24.129795   31374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:24.130079   31374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
I0829 18:25:24.130721   31374 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:24.130815   31374 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:24.131188   31374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:24.131236   31374 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:24.150222   31374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
I0829 18:25:24.150756   31374 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:24.151257   31374 main.go:141] libmachine: Using API Version  1
I0829 18:25:24.151272   31374 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:24.151682   31374 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:24.151917   31374 main.go:141] libmachine: (functional-024872) Calling .GetState
I0829 18:25:24.156883   31374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:24.156937   31374 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:24.173468   31374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44329
I0829 18:25:24.173978   31374 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:24.174550   31374 main.go:141] libmachine: Using API Version  1
I0829 18:25:24.174575   31374 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:24.175030   31374 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:24.175237   31374 main.go:141] libmachine: (functional-024872) Calling .DriverName
I0829 18:25:24.175462   31374 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:24.175489   31374 main.go:141] libmachine: (functional-024872) Calling .GetSSHHostname
I0829 18:25:24.178806   31374 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:24.179247   31374 main.go:141] libmachine: (functional-024872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:21", ip: ""} in network mk-functional-024872: {Iface:virbr1 ExpiryTime:2024-08-29 19:22:42 +0000 UTC Type:0 Mac:52:54:00:db:80:21 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-024872 Clientid:01:52:54:00:db:80:21}
I0829 18:25:24.179269   31374 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined IP address 192.168.39.12 and MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:24.179404   31374 main.go:141] libmachine: (functional-024872) Calling .GetSSHPort
I0829 18:25:24.179540   31374 main.go:141] libmachine: (functional-024872) Calling .GetSSHKeyPath
I0829 18:25:24.179676   31374 main.go:141] libmachine: (functional-024872) Calling .GetSSHUsername
I0829 18:25:24.179818   31374 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/functional-024872/id_rsa Username:docker}
I0829 18:25:24.284079   31374 ssh_runner.go:195] Run: sudo crictl images --output json
I0829 18:25:24.332978   31374 main.go:141] libmachine: Making call to close driver server
I0829 18:25:24.332995   31374 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:24.333250   31374 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:24.333270   31374 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:25:24.333286   31374 main.go:141] libmachine: Making call to close driver server
I0829 18:25:24.333296   31374 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:24.333579   31374 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:24.333595   31374 main.go:141] libmachine: (functional-024872) DBG | Closing plugin on server side
I0829 18:25:24.333598   31374 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh pgrep buildkitd: exit status 1 (204.940082ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image build -t localhost/my-image:functional-024872 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 image build -t localhost/my-image:functional-024872 testdata/build --alsologtostderr: (6.225987422s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-024872 image build -t localhost/my-image:functional-024872 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7159c8327d8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-024872
--> 37f27483df8
Successfully tagged localhost/my-image:functional-024872
37f27483df8c868d11b299a95dbc3d90c38d272d4c158b709106bd736423f4b7
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-024872 image build -t localhost/my-image:functional-024872 testdata/build --alsologtostderr:
I0829 18:25:24.584322   31427 out.go:345] Setting OutFile to fd 1 ...
I0829 18:25:24.584591   31427 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:24.584601   31427 out.go:358] Setting ErrFile to fd 2...
I0829 18:25:24.584605   31427 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:25:24.584762   31427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
I0829 18:25:24.585271   31427 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:24.585784   31427 config.go:182] Loaded profile config "functional-024872": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0829 18:25:24.586192   31427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:24.586271   31427 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:24.600862   31427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36907
I0829 18:25:24.601324   31427 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:24.601810   31427 main.go:141] libmachine: Using API Version  1
I0829 18:25:24.601824   31427 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:24.602216   31427 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:24.602412   31427 main.go:141] libmachine: (functional-024872) Calling .GetState
I0829 18:25:24.604497   31427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0829 18:25:24.604538   31427 main.go:141] libmachine: Launching plugin server for driver kvm2
I0829 18:25:24.618832   31427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37825
I0829 18:25:24.619207   31427 main.go:141] libmachine: () Calling .GetVersion
I0829 18:25:24.619757   31427 main.go:141] libmachine: Using API Version  1
I0829 18:25:24.619782   31427 main.go:141] libmachine: () Calling .SetConfigRaw
I0829 18:25:24.620071   31427 main.go:141] libmachine: () Calling .GetMachineName
I0829 18:25:24.620243   31427 main.go:141] libmachine: (functional-024872) Calling .DriverName
I0829 18:25:24.620428   31427 ssh_runner.go:195] Run: systemctl --version
I0829 18:25:24.620459   31427 main.go:141] libmachine: (functional-024872) Calling .GetSSHHostname
I0829 18:25:24.623167   31427 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:24.623577   31427 main.go:141] libmachine: (functional-024872) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:21", ip: ""} in network mk-functional-024872: {Iface:virbr1 ExpiryTime:2024-08-29 19:22:42 +0000 UTC Type:0 Mac:52:54:00:db:80:21 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-024872 Clientid:01:52:54:00:db:80:21}
I0829 18:25:24.623619   31427 main.go:141] libmachine: (functional-024872) DBG | domain functional-024872 has defined IP address 192.168.39.12 and MAC address 52:54:00:db:80:21 in network mk-functional-024872
I0829 18:25:24.623731   31427 main.go:141] libmachine: (functional-024872) Calling .GetSSHPort
I0829 18:25:24.623887   31427 main.go:141] libmachine: (functional-024872) Calling .GetSSHKeyPath
I0829 18:25:24.624061   31427 main.go:141] libmachine: (functional-024872) Calling .GetSSHUsername
I0829 18:25:24.624221   31427 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/functional-024872/id_rsa Username:docker}
I0829 18:25:24.711524   31427 build_images.go:161] Building image from path: /tmp/build.3298506175.tar
I0829 18:25:24.711606   31427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 18:25:24.730543   31427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3298506175.tar
I0829 18:25:24.739041   31427 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3298506175.tar: stat -c "%s %y" /var/lib/minikube/build/build.3298506175.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3298506175.tar': No such file or directory
I0829 18:25:24.739068   31427 ssh_runner.go:362] scp /tmp/build.3298506175.tar --> /var/lib/minikube/build/build.3298506175.tar (3072 bytes)
I0829 18:25:24.777261   31427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3298506175
I0829 18:25:24.792534   31427 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3298506175 -xf /var/lib/minikube/build/build.3298506175.tar
I0829 18:25:24.807554   31427 crio.go:315] Building image: /var/lib/minikube/build/build.3298506175
I0829 18:25:24.807641   31427 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-024872 /var/lib/minikube/build/build.3298506175 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0829 18:25:30.727030   31427 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-024872 /var/lib/minikube/build/build.3298506175 --cgroup-manager=cgroupfs: (5.919348227s)
I0829 18:25:30.727104   31427 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3298506175
I0829 18:25:30.749882   31427 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3298506175.tar
I0829 18:25:30.766119   31427 build_images.go:217] Built localhost/my-image:functional-024872 from /tmp/build.3298506175.tar
I0829 18:25:30.766162   31427 build_images.go:133] succeeded building to: functional-024872
I0829 18:25:30.766168   31427 build_images.go:134] failed building to: 
I0829 18:25:30.766195   31427 main.go:141] libmachine: Making call to close driver server
I0829 18:25:30.766213   31427 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:30.766630   31427 main.go:141] libmachine: (functional-024872) DBG | Closing plugin on server side
I0829 18:25:30.766643   31427 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:30.766661   31427 main.go:141] libmachine: Making call to close connection to plugin binary
I0829 18:25:30.766678   31427 main.go:141] libmachine: Making call to close driver server
I0829 18:25:30.766686   31427 main.go:141] libmachine: (functional-024872) Calling .Close
I0829 18:25:30.766930   31427 main.go:141] libmachine: Successfully made call to close driver server
I0829 18:25:30.766944   31427 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.64s)
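
Note: the build above is driven entirely through the "image build" subcommand: as the Stderr shows, minikube copies the build-context tarball to /var/lib/minikube/build on the node and runs "sudo podman build ... --cgroup-manager=cgroupfs" there. A minimal sketch of invoking the same flow and checking the result, mirroring what functional_test.go:315 and :451 do (binary path and profile name are the ones from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	bin, profile := "out/minikube-linux-amd64", "functional-024872"
    	tag := "localhost/my-image:" + profile

    	// Build the testdata/build context into the node's container storage.
    	if out, err := exec.Command(bin, "-p", profile, "image", "build",
    		"-t", tag, "testdata/build", "--alsologtostderr").CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("image build failed: %v\n%s", err, out))
    	}

    	// Confirm the new tag is visible to "image ls", as the test does afterwards.
    	ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
    	if err != nil {
    		panic(err)
    	}
    	if strings.Contains(string(ls), tag) {
    		fmt.Println("built and listed:", tag)
    	}
    }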

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.730585168s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-024872
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image load --daemon kicbase/echo-server:functional-024872 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 image load --daemon kicbase/echo-server:functional-024872 --alsologtostderr: (1.085166651s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image load --daemon kicbase/echo-server:functional-024872 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-024872
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image load --daemon kicbase/echo-server:functional-024872 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 image load --daemon kicbase/echo-server:functional-024872 --alsologtostderr: (2.669620341s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image save kicbase/echo-server:functional-024872 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image rm kicbase/echo-server:functional-024872 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-024872
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 image save --daemon kicbase/echo-server:functional-024872 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 image save --daemon kicbase/echo-server:functional-024872 --alsologtostderr: (1.001216286s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-024872
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)
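
Note: the four sections above (ImageSaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) together exercise a save, remove, and reload round trip of the echo-server image. A minimal sketch of that cycle using the same subcommands; the tarball path here is an arbitrary example rather than the workspace path used by the test:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes one minikube subcommand and panics on a non-zero exit.
    func run(args ...string) {
    	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
    	}
    }

    func main() {
    	profile := "functional-024872" // profile name from this run
    	tag := "kicbase/echo-server:" + profile
    	tar := "/tmp/echo-server-save.tar" // example path

    	run("-p", profile, "image", "save", tag, tar) // save the in-cluster image to a tarball
    	run("-p", profile, "image", "rm", tag)        // remove it from the node
    	run("-p", profile, "image", "load", tar)      // load it back from the tarball
    	run("-p", profile, "image", "ls")             // it should be listed again
    }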

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (17.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-024872 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-024872 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-7c75v" [9c16a408-976f-436a-8310-5bfa1f0187a3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-7c75v" [9c16a408-976f-436a-8310-5bfa1f0187a3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.003603363s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.16s)
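
Note: DeployApp above boils down to two kubectl calls against the functional-024872 context plus a wait for the pod to become Ready. A rough Go sketch of the same steps; the simple polling loop stands in for the test framework's pod watcher, and errors from the create/expose calls are ignored for brevity:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	ctx := "functional-024872" // kubectl context from this run

    	// Create the deployment and expose it on a NodePort, as functional_test.go:1439/1445 do.
    	exec.Command("kubectl", "--context", ctx, "create", "deployment", "hello-node",
    		"--image=registry.k8s.io/echoserver:1.8").Run()
    	exec.Command("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
    		"--type=NodePort", "--port=8080").Run()

    	// Poll until the container reports ready (the test waits up to 10m with a proper watcher).
    	for i := 0; i < 60; i++ {
    		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
    			"-l", "app=hello-node",
    			"-o", "jsonpath={.items[*].status.containerStatuses[*].ready}").Output()
    		if strings.Contains(string(out), "true") {
    			fmt.Println("hello-node is ready")
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("timed out waiting for hello-node")
    }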

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdspecific-port2630658239/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.762201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdspecific-port2630658239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-024872 ssh "sudo umount -f /mount-9p": exit status 1 (223.831946ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-024872 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdspecific-port2630658239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)
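
Note: MountCmd/specific-port starts "minikube mount" as a background daemon on a fixed 9p port and then checks the guest with findmnt over ssh. A minimal stand-alone sketch of that pattern; the host directory is an example, and the fixed sleep is a crude replacement for the test's retrying findmnt check:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	bin, profile := "out/minikube-linux-amd64", "functional-024872"
    	hostDir := "/tmp/mount-demo" // example host directory
    	if err := os.MkdirAll(hostDir, 0o755); err != nil {
    		panic(err)
    	}

    	// Background 9p mount on a fixed port, like the daemon step above.
    	mount := exec.Command(bin, "mount", "-p", profile,
    		hostDir+":/mount-9p", "--port", "46464", "--alsologtostderr", "-v=1")
    	if err := mount.Start(); err != nil {
    		panic(err)
    	}
    	defer mount.Process.Kill()

    	time.Sleep(5 * time.Second)

    	// Verify from inside the guest, mirroring the findmnt check above.
    	out, err := exec.Command(bin, "-p", profile, "ssh",
    		"findmnt -T /mount-9p | grep 9p").CombinedOutput()
    	fmt.Printf("%s(findmnt err=%v)\n", out, err)
    }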

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3657320497/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3657320497/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3657320497/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-024872 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3657320497/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3657320497/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-024872 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3657320497/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "233.941495ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.53625ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "215.382583ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "41.145473ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 service list: (1.34654666s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-024872 service list -o json: (1.282020003s)
functional_test.go:1494: Took "1.282130629s" to run "out/minikube-linux-amd64 -p functional-024872 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.12:32559
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-024872 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.12:32559
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
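
Note: the last four ServiceCmd sections all resolve the hello-node NodePort endpoint (HTTPS, format, and URL variants of the same lookup). A small sketch that asks for the plain URL and then fetches it; the HTTP GET at the end goes one step further than the test, which only verifies that an endpoint is printed:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"os/exec"
    	"strings"
    )

    func main() {
    	bin, profile := "out/minikube-linux-amd64", "functional-024872"

    	// Resolve the service endpoint, as functional_test.go:1559 does.
    	out, err := exec.Command(bin, "-p", profile, "service", "hello-node", "--url").Output()
    	if err != nil {
    		panic(err)
    	}
    	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.12:32559 in this run

    	resp, err := http.Get(url)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
    }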

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-024872
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-024872
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-024872
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (195.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-782425 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 18:26:10.566138   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:26.706437   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-782425 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m14.96344295s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.62s)
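
StartCluster asserts only on `minikube status` after the `--ha` start. A quick cross-check from the Kubernetes side is to count control-plane nodes; this is a sketch rather than part of the test, and it assumes the nodes carry the usual kubeadm node-role.kubernetes.io/control-plane label (an HA start is expected to produce three of them before the worker node is added):

    # list only the control-plane nodes of the ha-782425 cluster
    kubectl --context ha-782425 get nodes -l node-role.kubernetes.io/control-plane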

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- rollout status deployment/busybox
E0829 18:28:54.407938   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-782425 -- rollout status deployment/busybox: (4.866574071s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-h8k94 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-rsqqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-vwgrt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-h8k94 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-rsqqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-vwgrt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-h8k94 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-rsqqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-vwgrt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.94s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-h8k94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-h8k94 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-rsqqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-rsqqv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-vwgrt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-782425 -- exec busybox-7dff88458-vwgrt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)
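
The host-reachability check above is driven by a small pipeline run inside each busybox pod: resolve host.minikube.internal, keep the address field, then ping it once. Pulled out of the test for readability (pod name taken from this run; the NR==5 filter assumes busybox nslookup prints the resolved address on its fifth output line):

    # recover the KVM host's address (192.168.39.1 here) from inside the pod
    HOST_IP=$(kubectl --context ha-782425 exec busybox-7dff88458-h8k94 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # a single ICMP echo proves the pod can reach the host network
    kubectl --context ha-782425 exec busybox-7dff88458-h8k94 -- sh -c "ping -c 1 $HOST_IP"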

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (57.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-782425 -v=7 --alsologtostderr
E0829 18:29:49.632226   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:49.638709   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:49.650159   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:49.671664   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:49.713102   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:49.794898   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:49.956505   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:50.278686   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:50.920424   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:52.202721   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:54.764491   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-782425 -v=7 --alsologtostderr: (56.527039602s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-782425 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status --output json -v=7 --alsologtostderr
E0829 18:29:59.886357   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp testdata/cp-test.txt ha-782425:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425:/home/docker/cp-test.txt ha-782425-m02:/home/docker/cp-test_ha-782425_ha-782425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test_ha-782425_ha-782425-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425:/home/docker/cp-test.txt ha-782425-m03:/home/docker/cp-test_ha-782425_ha-782425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test_ha-782425_ha-782425-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425:/home/docker/cp-test.txt ha-782425-m04:/home/docker/cp-test_ha-782425_ha-782425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test_ha-782425_ha-782425-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp testdata/cp-test.txt ha-782425-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m02:/home/docker/cp-test.txt ha-782425:/home/docker/cp-test_ha-782425-m02_ha-782425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test_ha-782425-m02_ha-782425.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m02:/home/docker/cp-test.txt ha-782425-m03:/home/docker/cp-test_ha-782425-m02_ha-782425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test_ha-782425-m02_ha-782425-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m02:/home/docker/cp-test.txt ha-782425-m04:/home/docker/cp-test_ha-782425-m02_ha-782425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test_ha-782425-m02_ha-782425-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp testdata/cp-test.txt ha-782425-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt ha-782425:/home/docker/cp-test_ha-782425-m03_ha-782425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test_ha-782425-m03_ha-782425.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt ha-782425-m02:/home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test_ha-782425-m03_ha-782425-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m03:/home/docker/cp-test.txt ha-782425-m04:/home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test_ha-782425-m03_ha-782425-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp testdata/cp-test.txt ha-782425-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1158605446/001/cp-test_ha-782425-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt ha-782425:/home/docker/cp-test_ha-782425-m04_ha-782425.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test_ha-782425-m04_ha-782425.txt"
E0829 18:30:10.128047   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt ha-782425-m02:/home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m02 "sudo cat /home/docker/cp-test_ha-782425-m04_ha-782425-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 cp ha-782425-m04:/home/docker/cp-test.txt ha-782425-m03:/home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425-m03 "sudo cat /home/docker/cp-test_ha-782425-m04_ha-782425-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.26s)
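
Every cp/ssh pair above is the same round trip: copy a file onto a node, then read it back over ssh (on that node, or on another node it was forwarded to) and compare contents. One iteration of the pattern, using the same testdata file and profile as the run above:

    # push the local test file onto the primary control-plane node
    out/minikube-linux-amd64 -p ha-782425 cp testdata/cp-test.txt ha-782425:/home/docker/cp-test.txt
    # read it back over ssh; identical contents means the copy round-tripped intact
    out/minikube-linux-amd64 -p ha-782425 ssh -n ha-782425 "sudo cat /home/docker/cp-test.txt"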

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0829 18:32:33.493641   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.467148637s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 node delete m03 -v=7 --alsologtostderr
E0829 18:39:49.633747   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:39:49.770256   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-782425 node delete m03 -v=7 --alsologtostderr: (15.604088316s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (358.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-782425 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 18:43:26.706541   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:44:49.632449   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:46:12.696916   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-782425 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m57.461584641s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (358.18s)
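
The final assertion in RestartCluster (and in DeleteSecondaryNode above) folds node health into one go-template: it walks every node's conditions and prints the status of the Ready condition, so a clean restart yields nothing but "True" lines. With the shell quoting stripped, the check is simply:

    # one line per node with the status of its Ready condition ("True" when healthy)
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'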

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (79.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-782425 --control-plane -v=7 --alsologtostderr
E0829 18:48:26.706194   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-782425 --control-plane -v=7 --alsologtostderr: (1m18.522762153s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-782425 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
x
+
TestJSONOutput/start/Command (83.74s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-444290 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0829 18:49:49.633407   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-444290 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.735475601s)
--- PASS: TestJSONOutput/start/Command (83.74s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-444290 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-444290 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.62s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-444290 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-444290 --output=json --user=testUser: (6.617420357s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-827917 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-827917 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (55.710691ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ed9a2492-7f4f-4412-83df-41aaf6624b57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-827917] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8472180-cd55-4b42-b50b-55063319d205","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"34e306dd-e949-481a-bf25-66ae00ab4099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3bd6787b-aa61-4947-a418-fb9d31e56f92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig"}}
	{"specversion":"1.0","id":"ffb00fb5-1b23-44d7-9a72-57cf61942475","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube"}}
	{"specversion":"1.0","id":"a9d972a5-509e-4ecc-9d5b-19bc187fcda8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c70fbcf2-2022-4783-a75b-b10ab5ade26e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fc4e229e-400d-435b-92d9-740b7d374b97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-827917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-827917
--- PASS: TestErrorJSONOutput (0.18s)
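
The captured stdout is a stream of CloudEvents-style JSON objects, one per line, and the failure itself travels in the single event of type io.k8s.sigs.minikube.error. A sketch of how that stream could be reduced to the interesting fields with jq (jq is not used by the test; the field names are taken from the output above):

    # re-run the intentionally failing start and keep only the error event
    out/minikube-linux-amd64 start -p json-output-error-827917 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name) (exit \(.exitcode)): \(.message)"'
    # clean up the throwaway profile, as the test's helper does
    out/minikube-linux-amd64 delete -p json-output-error-827917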

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (83.53s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-399999 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-399999 --driver=kvm2  --container-runtime=crio: (42.400375403s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-402228 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-402228 --driver=kvm2  --container-runtime=crio: (38.749247357s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-399999
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-402228
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-402228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-402228
helpers_test.go:175: Cleaning up "first-399999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-399999
--- PASS: TestMinikubeProfile (83.53s)
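
TestMinikubeProfile switches the active profile back and forth with `minikube profile <name>` and then lists profiles as JSON. One practical effect worth knowing, assumed here rather than asserted by the output above, is that switching the profile also repoints kubectl's current context:

    # make first-399999 the active profile, then confirm kubectl follows it
    out/minikube-linux-amd64 profile first-399999
    kubectl config current-context    # expected to print first-399999 while that profile is active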

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-041847 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-041847 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.99309405s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-041847 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-041847 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
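
VerifyMountFirst probes the host mount from inside the VM in two steps: the mounted directory must be listable, and the mount table must show it as a 9p filesystem. Annotated, the same two probes from the run above are:

    # the mount point created by --mount must be readable inside the VM
    out/minikube-linux-amd64 -p mount-start-1-041847 ssh -- ls /minikube-host
    # and the guest's mount table must contain a 9p entry for it (grep exits non-zero if not)
    out/minikube-linux-amd64 -p mount-start-1-041847 ssh -- mount | grep 9p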

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-057604 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0829 18:53:26.706636   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-057604 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.38290224s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.38s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057604 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057604 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-041847 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057604 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057604 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-057604
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-057604: (1.25985177s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.62s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-057604
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-057604: (21.617378631s)
--- PASS: TestMountStart/serial/RestartStopped (22.62s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057604 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-057604 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (111.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922931 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 18:54:49.632260   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922931 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.57431457s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.98s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-922931 -- rollout status deployment/busybox: (5.072462941s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-9dk5v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-vkgrg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-9dk5v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-vkgrg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-9dk5v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-vkgrg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.46s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-9dk5v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-9dk5v -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-vkgrg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922931 -- exec busybox-7dff88458-vkgrg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (46.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-922931 -v 3 --alsologtostderr
E0829 18:56:29.771961   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-922931 -v 3 --alsologtostderr: (46.155155304s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.70s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-922931 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp testdata/cp-test.txt multinode-922931:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2825152660/001/cp-test_multinode-922931.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931:/home/docker/cp-test.txt multinode-922931-m02:/home/docker/cp-test_multinode-922931_multinode-922931-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m02 "sudo cat /home/docker/cp-test_multinode-922931_multinode-922931-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931:/home/docker/cp-test.txt multinode-922931-m03:/home/docker/cp-test_multinode-922931_multinode-922931-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m03 "sudo cat /home/docker/cp-test_multinode-922931_multinode-922931-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp testdata/cp-test.txt multinode-922931-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2825152660/001/cp-test_multinode-922931-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt multinode-922931:/home/docker/cp-test_multinode-922931-m02_multinode-922931.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931 "sudo cat /home/docker/cp-test_multinode-922931-m02_multinode-922931.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931-m02:/home/docker/cp-test.txt multinode-922931-m03:/home/docker/cp-test_multinode-922931-m02_multinode-922931-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m03 "sudo cat /home/docker/cp-test_multinode-922931-m02_multinode-922931-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp testdata/cp-test.txt multinode-922931-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2825152660/001/cp-test_multinode-922931-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt multinode-922931:/home/docker/cp-test_multinode-922931-m03_multinode-922931.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931 "sudo cat /home/docker/cp-test_multinode-922931-m03_multinode-922931.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 cp multinode-922931-m03:/home/docker/cp-test.txt multinode-922931-m02:/home/docker/cp-test_multinode-922931-m03_multinode-922931-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 ssh -n multinode-922931-m02 "sudo cat /home/docker/cp-test_multinode-922931-m03_multinode-922931-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.99s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-922931 node stop m03: (1.346415214s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922931 status: exit status 7 (414.983757ms)

                                                
                                                
-- stdout --
	multinode-922931
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-922931-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-922931-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922931 status --alsologtostderr: exit status 7 (408.689804ms)

                                                
                                                
-- stdout --
	multinode-922931
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-922931-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-922931-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:56:54.857603   49135 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:56:54.857965   49135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:56:54.857981   49135 out.go:358] Setting ErrFile to fd 2...
	I0829 18:56:54.857988   49135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:56:54.858243   49135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 18:56:54.858428   49135 out.go:352] Setting JSON to false
	I0829 18:56:54.858450   49135 mustload.go:65] Loading cluster: multinode-922931
	I0829 18:56:54.858556   49135 notify.go:220] Checking for updates...
	I0829 18:56:54.858836   49135 config.go:182] Loaded profile config "multinode-922931": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 18:56:54.858850   49135 status.go:255] checking status of multinode-922931 ...
	I0829 18:56:54.859355   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:54.859395   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:54.875229   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0829 18:56:54.875683   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:54.876335   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:54.876378   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:54.876695   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:54.876873   49135 main.go:141] libmachine: (multinode-922931) Calling .GetState
	I0829 18:56:54.878359   49135 status.go:330] multinode-922931 host status = "Running" (err=<nil>)
	I0829 18:56:54.878377   49135 host.go:66] Checking if "multinode-922931" exists ...
	I0829 18:56:54.878737   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:54.878785   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:54.894033   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I0829 18:56:54.894442   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:54.894888   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:54.894904   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:54.895212   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:54.895387   49135 main.go:141] libmachine: (multinode-922931) Calling .GetIP
	I0829 18:56:54.897881   49135 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:56:54.898237   49135 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:56:54.898267   49135 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:56:54.898368   49135 host.go:66] Checking if "multinode-922931" exists ...
	I0829 18:56:54.898660   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:54.898692   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:54.913561   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0829 18:56:54.914025   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:54.914595   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:54.914616   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:54.914914   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:54.915079   49135 main.go:141] libmachine: (multinode-922931) Calling .DriverName
	I0829 18:56:54.915267   49135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:56:54.915287   49135 main.go:141] libmachine: (multinode-922931) Calling .GetSSHHostname
	I0829 18:56:54.917702   49135 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:56:54.918226   49135 main.go:141] libmachine: (multinode-922931) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:94:3d", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:54:13 +0000 UTC Type:0 Mac:52:54:00:62:94:3d Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-922931 Clientid:01:52:54:00:62:94:3d}
	I0829 18:56:54.918269   49135 main.go:141] libmachine: (multinode-922931) DBG | domain multinode-922931 has defined IP address 192.168.39.171 and MAC address 52:54:00:62:94:3d in network mk-multinode-922931
	I0829 18:56:54.918401   49135 main.go:141] libmachine: (multinode-922931) Calling .GetSSHPort
	I0829 18:56:54.918676   49135 main.go:141] libmachine: (multinode-922931) Calling .GetSSHKeyPath
	I0829 18:56:54.918835   49135 main.go:141] libmachine: (multinode-922931) Calling .GetSSHUsername
	I0829 18:56:54.919026   49135 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931/id_rsa Username:docker}
	I0829 18:56:55.001221   49135 ssh_runner.go:195] Run: systemctl --version
	I0829 18:56:55.006998   49135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:56:55.021617   49135 kubeconfig.go:125] found "multinode-922931" server: "https://192.168.39.171:8443"
	I0829 18:56:55.021650   49135 api_server.go:166] Checking apiserver status ...
	I0829 18:56:55.021695   49135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:56:55.035469   49135 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0829 18:56:55.045142   49135 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0829 18:56:55.045211   49135 ssh_runner.go:195] Run: ls
	I0829 18:56:55.048866   49135 api_server.go:253] Checking apiserver healthz at https://192.168.39.171:8443/healthz ...
	I0829 18:56:55.052845   49135 api_server.go:279] https://192.168.39.171:8443/healthz returned 200:
	ok
	I0829 18:56:55.052865   49135 status.go:422] multinode-922931 apiserver status = Running (err=<nil>)
	I0829 18:56:55.052874   49135 status.go:257] multinode-922931 status: &{Name:multinode-922931 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:56:55.052907   49135 status.go:255] checking status of multinode-922931-m02 ...
	I0829 18:56:55.053198   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:55.053233   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:55.068246   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32945
	I0829 18:56:55.068704   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:55.069214   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:55.069233   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:55.069524   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:55.069703   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .GetState
	I0829 18:56:55.071242   49135 status.go:330] multinode-922931-m02 host status = "Running" (err=<nil>)
	I0829 18:56:55.071255   49135 host.go:66] Checking if "multinode-922931-m02" exists ...
	I0829 18:56:55.071612   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:55.071650   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:55.086990   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44909
	I0829 18:56:55.087386   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:55.087876   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:55.087897   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:55.088306   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:55.088528   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .GetIP
	I0829 18:56:55.091142   49135 main.go:141] libmachine: (multinode-922931-m02) DBG | domain multinode-922931-m02 has defined MAC address 52:54:00:fe:02:e7 in network mk-multinode-922931
	I0829 18:56:55.091579   49135 main.go:141] libmachine: (multinode-922931-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:02:e7", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:55:13 +0000 UTC Type:0 Mac:52:54:00:fe:02:e7 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:multinode-922931-m02 Clientid:01:52:54:00:fe:02:e7}
	I0829 18:56:55.091606   49135 main.go:141] libmachine: (multinode-922931-m02) DBG | domain multinode-922931-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:fe:02:e7 in network mk-multinode-922931
	I0829 18:56:55.091698   49135 host.go:66] Checking if "multinode-922931-m02" exists ...
	I0829 18:56:55.092001   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:55.092031   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:55.106652   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44809
	I0829 18:56:55.107054   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:55.107501   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:55.107519   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:55.107785   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:55.107937   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .DriverName
	I0829 18:56:55.108087   49135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:56:55.108111   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .GetSSHHostname
	I0829 18:56:55.111003   49135 main.go:141] libmachine: (multinode-922931-m02) DBG | domain multinode-922931-m02 has defined MAC address 52:54:00:fe:02:e7 in network mk-multinode-922931
	I0829 18:56:55.111427   49135 main.go:141] libmachine: (multinode-922931-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:02:e7", ip: ""} in network mk-multinode-922931: {Iface:virbr1 ExpiryTime:2024-08-29 19:55:13 +0000 UTC Type:0 Mac:52:54:00:fe:02:e7 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:multinode-922931-m02 Clientid:01:52:54:00:fe:02:e7}
	I0829 18:56:55.111471   49135 main.go:141] libmachine: (multinode-922931-m02) DBG | domain multinode-922931-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:fe:02:e7 in network mk-multinode-922931
	I0829 18:56:55.111644   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .GetSSHPort
	I0829 18:56:55.111804   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .GetSSHKeyPath
	I0829 18:56:55.111974   49135 main.go:141] libmachine: (multinode-922931-m02) Calling .GetSSHUsername
	I0829 18:56:55.112080   49135 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19531-13056/.minikube/machines/multinode-922931-m02/id_rsa Username:docker}
	I0829 18:56:55.192706   49135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:56:55.205170   49135 status.go:257] multinode-922931-m02 status: &{Name:multinode-922931-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:56:55.205200   49135 status.go:255] checking status of multinode-922931-m03 ...
	I0829 18:56:55.205516   49135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0829 18:56:55.205560   49135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0829 18:56:55.222305   49135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36091
	I0829 18:56:55.222813   49135 main.go:141] libmachine: () Calling .GetVersion
	I0829 18:56:55.223268   49135 main.go:141] libmachine: Using API Version  1
	I0829 18:56:55.223291   49135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0829 18:56:55.223572   49135 main.go:141] libmachine: () Calling .GetMachineName
	I0829 18:56:55.223747   49135 main.go:141] libmachine: (multinode-922931-m03) Calling .GetState
	I0829 18:56:55.225163   49135 status.go:330] multinode-922931-m03 host status = "Stopped" (err=<nil>)
	I0829 18:56:55.225174   49135 status.go:343] host is not running, skipping remaining checks
	I0829 18:56:55.225180   49135 status.go:257] multinode-922931-m03 status: &{Name:multinode-922931-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
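Note: the two status invocations above exit non-zero (exit status 7) only because node m03 is stopped, not because the command itself failed. A minimal Go sketch in the spirit of the test helpers, reusing the binary path and profile name from this log and treating exit status 7 the way this run did (that mapping is taken from the log, not from minikube documentation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test runs at multinode_test.go:254.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-922931", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// In this run, exit status 7 simply reflected a stopped host in the profile.
		fmt.Println("one or more nodes are stopped; treating exit status 7 as expected")
		return
	}
	if err != nil {
		fmt.Println("unexpected status failure:", err)
	}
}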

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.00s)


                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-922931 node start m03 -v=7 --alsologtostderr: (39.385301769s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.00s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-922931 node delete m03: (1.629735816s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.19s)
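Note: the final check at multinode_test.go:444 drives kubectl with a go-template that prints each node's Ready condition. A standalone sketch of the same check, not code from the suite, with the literal single quotes from the logged command dropped for clarity and kubectl assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template idea as in the logged command: emit the status of every
	// node's Ready condition, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all remaining nodes report Ready=True")
}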

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (178.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922931 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0829 19:08:26.706938   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922931 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.244877986s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922931 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.76s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-922931
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922931-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-922931-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.518574ms)

                                                
                                                
-- stdout --
	* [multinode-922931-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-922931-m02' is duplicated with machine name 'multinode-922931-m02' in profile 'multinode-922931'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922931-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922931-m03 --driver=kvm2  --container-runtime=crio: (42.577663847s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-922931
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-922931: exit status 80 (201.691787ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-922931 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-922931-m03 already exists in multinode-922931-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-922931-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.86s)

                                                
                                    
x
+
TestScheduledStopUnix (114.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-993675 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-993675 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.850401515s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993675 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-993675 -n scheduled-stop-993675
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993675 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993675 --cancel-scheduled
E0829 19:14:49.633312   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993675 -n scheduled-stop-993675
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-993675
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993675 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-993675
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-993675: exit status 7 (59.712579ms)

                                                
                                                
-- stdout --
	scheduled-stop-993675
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993675 -n scheduled-stop-993675
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993675 -n scheduled-stop-993675: exit status 7 (60.862066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-993675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-993675
--- PASS: TestScheduledStopUnix (114.38s)
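Note: this test exercises three stop flags on the same profile: --schedule 5m, a shorter re-schedule, and --cancel-scheduled. A small Go sketch, not part of the suite, that schedules a stop and then cancels it using the same binary path and profile name as this log:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube invocation and echoes its combined output.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	profile := "scheduled-stop-993675"
	// Ask for a stop five minutes from now, exactly as the test does.
	if err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
		fmt.Println("schedule failed:", err)
		return
	}
	// Change of plan: cancel the pending stop before it fires.
	if err := run("stop", "-p", profile, "--cancel-scheduled"); err != nil {
		fmt.Println("cancel failed:", err)
	}
}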

                                                
                                    
x
+
TestRunningBinaryUpgrade (196.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.424784852 start -p running-upgrade-273811 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.424784852 start -p running-upgrade-273811 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m1.173191569s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-273811 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-273811 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.794655887s)
helpers_test.go:175: Cleaning up "running-upgrade-273811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-273811
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-273811: (1.767845261s)
--- PASS: TestRunningBinaryUpgrade (196.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212110 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-212110 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (74.639972ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-212110] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
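Note: as the stderr above explains, --no-kubernetes cannot be combined with --kubernetes-version, and the suggested fix is to clear the global setting first. A hedged sketch of that recovery path, reusing the binary path, profile name and flags shown in this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"

	// Step 1: drop any global kubernetes-version setting, as the error output suggests.
	if out, err := exec.Command(bin, "config", "unset", "kubernetes-version").CombinedOutput(); err != nil {
		fmt.Printf("config unset failed: %v\n%s", err, out)
		return
	}

	// Step 2: start the profile without Kubernetes and without a version pin.
	out, err := exec.Command(bin, "start", "-p", "NoKubernetes-212110",
		"--no-kubernetes", "--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}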

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (93.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212110 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212110 --driver=kvm2  --container-runtime=crio: (1m33.051111884s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-212110 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (128.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.976632942 start -p stopped-upgrade-054401 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.976632942 start -p stopped-upgrade-054401 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m26.415482668s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.976632942 -p stopped-upgrade-054401 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.976632942 -p stopped-upgrade-054401 stop: (1.367595101s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-054401 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-054401 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.342022925s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (38.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212110 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212110 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.565130321s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-212110 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-212110 status -o json: exit status 2 (247.814076ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-212110","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-212110
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-212110: (1.015465809s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.83s)
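Note: the JSON from status -o json above carries fields such as Name, Host, Kubelet and APIServer, and the exit status 2 only signals that a component is stopped. A small Go decoder for exactly the fields visible in this log (any others are ignored); this is a sketch, not the helper the test uses:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors only the fields visible in the logged output of
// `minikube -p NoKubernetes-212110 status -o json`.
type profileStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "NoKubernetes-212110", "status", "-o", "json").Output()
	// A non-zero exit (status 2 in this log) still leaves JSON on stdout,
	// so decode whatever was produced before deciding how to treat err.
	var st profileStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("could not decode status JSON:", jsonErr, err)
		return
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}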

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.50s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212110 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212110 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.50275896s)
--- PASS: TestNoKubernetes/serial/Start (29.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-212110 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-212110 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.469259ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (32.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0829 19:18:26.706627   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/addons-647117/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.138575613s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.590117259s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.73s)
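Note: the test above runs profile list twice, once with --output=json. The JSON schema is not shown in this log, so the following Go sketch decodes it generically rather than into a fixed struct; it is an illustration, not the code behind no_kubernetes_test.go:179:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output=json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decode only the top level, since the exact shape is not documented here.
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for key, raw := range payload {
		fmt.Printf("top-level key %q carries %d bytes of JSON\n", key, len(raw))
	}
}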

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-212110
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-212110: (2.36045008s)
--- PASS: TestNoKubernetes/serial/Stop (2.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (43.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-212110 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-212110 --driver=kvm2  --container-runtime=crio: (43.08405867s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.08s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-054401
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-212110 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-212110 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.436823ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-633326 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-633326 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (98.381396ms)

                                                
                                                
-- stdout --
	* [false-633326] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 19:19:42.399631   60785 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:19:42.399864   60785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:19:42.399872   60785 out.go:358] Setting ErrFile to fd 2...
	I0829 19:19:42.399876   60785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:19:42.400031   60785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-13056/.minikube/bin
	I0829 19:19:42.400579   60785 out.go:352] Setting JSON to false
	I0829 19:19:42.401452   60785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7329,"bootTime":1724951853,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:19:42.401508   60785 start.go:139] virtualization: kvm guest
	I0829 19:19:42.403714   60785 out.go:177] * [false-633326] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:19:42.404876   60785 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 19:19:42.404886   60785 notify.go:220] Checking for updates...
	I0829 19:19:42.407113   60785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:19:42.408498   60785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-13056/kubeconfig
	I0829 19:19:42.409819   60785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-13056/.minikube
	I0829 19:19:42.411293   60785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:19:42.412720   60785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:19:42.414474   60785 config.go:182] Loaded profile config "cert-expiration-492436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:19:42.414628   60785 config.go:182] Loaded profile config "force-systemd-flag-523972": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0829 19:19:42.414741   60785 config.go:182] Loaded profile config "kubernetes-upgrade-353455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0829 19:19:42.414854   60785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:19:42.448845   60785 out.go:177] * Using the kvm2 driver based on user configuration
	I0829 19:19:42.450115   60785 start.go:297] selected driver: kvm2
	I0829 19:19:42.450142   60785 start.go:901] validating driver "kvm2" against <nil>
	I0829 19:19:42.450156   60785 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:19:42.452248   60785 out.go:201] 
	W0829 19:19:42.453626   60785 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0829 19:19:42.454915   60785 out.go:201] 

                                                
                                                
** /stderr **
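Note: the start above is rejected immediately because the crio runtime requires a CNI, so --cni=false cannot be used; the remaining debugLogs below simply confirm no cluster was created. A hedged sketch of an invocation that satisfies the constraint by naming a CNI explicitly; --cni=bridge is an assumed-valid value not taken from this log, while the other flags and the profile name are:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// As the stderr above states, crio needs a CNI, so pass one instead of --cni=false.
	out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "false-633326", "--memory=2048", "--cni=bridge",
		"--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("start failed:", err)
	}
}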
net_test.go:88: 
----------------------- debugLogs start: false-633326 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-633326" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-633326

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633326"

                                                
                                                
----------------------- debugLogs end: false-633326 [took: 2.683555324s] --------------------------------
helpers_test.go:175: Cleaning up "false-633326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-633326
--- PASS: TestNetworkPlugins/group/false (2.91s)
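Note: every debug-log probe above reports that the "false-633326" profile does not exist, which is why each section only echoes the profile-not-found hint. A minimal sketch of acting on that hint by hand, using only the two commands the log itself suggests (the start flags mirror the ones used throughout this run):

    # enumerate the profiles minikube currently knows about
    out/minikube-linux-amd64 profile list
    # recreate the profile with the same driver/runtime used by this job
    out/minikube-linux-amd64 start -p false-633326 --driver=kvm2 --container-runtime=crio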

                                                
                                    
x
+
TestPause/serial/Start (78.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-518621 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-518621 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m18.092675143s)
--- PASS: TestPause/serial/Start (78.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (114.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m54.557180974s)
--- PASS: TestNetworkPlugins/group/auto/Start (114.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (62.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m2.274674071s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (102.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m42.132875384s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p24qk" [abcf57cb-7244-4238-830e-3d8b4406b396] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p24qk" [abcf57cb-7244-4238-830e-3d8b4406b396] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005404129s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
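The DNS, Localhost, and HairPin checks above all run inside the same netcat deployment created by NetCatPod. A minimal sketch of reproducing the three probes by hand against the auto-633326 context, assuming the deployment from testdata/netcat-deployment.yaml is still present:

    # DNS: resolve the API service through cluster DNS
    kubectl --context auto-633326 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to the pod's own port without leaving the pod
    kubectl --context auto-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: reach the pod back through its own service name
    kubectl --context auto-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The same three probes repeat for every other network-plugin group below; only the context name changes.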

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (87.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.43218268s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4qvmz" [bb9bb327-2d64-4b2d-94d4-774de85786b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003766214s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6p8rs" [39de799a-2ad7-4bec-b028-5b9ce3c9d953] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6p8rs" [39de799a-2ad7-4bec-b028-5b9ce3c9d953] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00396932s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (58.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (58.288262564s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (85.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m25.24311803s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bthkr" [f65437de-578c-43f7-8be9-d500a860f9c7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004633387s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gblcl" [e135f373-7202-4ac6-a36b-1b21d7823e0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gblcl" [e135f373-7202-4ac6-a36b-1b21d7823e0f] Running
E0829 19:24:49.632531   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/functional-024872/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003704897s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-84b92" [d436d44f-f3e8-4c6d-b9ed-f000b1d3e9f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-84b92" [d436d44f-f3e8-4c6d-b9ed-f000b1d3e9f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005120879s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (92.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-633326 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m32.081421244s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.08s)
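Each network-plugin group in this run boots its own profile with a different CNI selection: a plugin name (kindnet, calico, flannel, bridge), a custom manifest path, or the legacy --enable-default-cni flag. A minimal sketch of the flag shapes exercised above, with the --alsologtostderr/--wait flags trimmed for brevity:

    # same driver and runtime throughout; only the CNI selector differs
    out/minikube-linux-amd64 start -p kindnet-633326 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p custom-flannel-633326 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p enable-default-cni-633326 --memory=3072 --enable-default-cni=true --driver=kvm2 --container-runtime=crio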

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qj7jj" [8786cc1e-57fb-4818-bcb3-0fda558e1430] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qj7jj" [8786cc1e-57fb-4818-bcb3-0fda558e1430] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004115641s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (91.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-690795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-690795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m31.606327403s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kzwbx" [ea27df72-e983-4c4b-ad23-31c32396f070] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004437676s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-874zt" [4063e983-bef4-4646-b3a2-330fde9b9710] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-874zt" [4063e983-bef4-4646-b3a2-330fde9b9710] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004356234s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (64.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-920571 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-920571 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m4.24129125s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-633326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-633326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4srb6" [f78822ce-2c51-4458-a552-ec7e3a4726d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4srb6" [f78822ce-2c51-4458-a552-ec7e3a4726d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004809372s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-633326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-633326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0829 19:55:49.950815   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-672127 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-672127 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m26.15393598s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-690795 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [efc5b6a8-65d0-4568-bb46-15e0694be475] Pending
helpers_test.go:344: "busybox" [efc5b6a8-65d0-4568-bb46-15e0694be475] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [efc5b6a8-65d0-4568-bb46-15e0694be475] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003390528s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-690795 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)
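DeployApp follows the same pattern for every StartStop group: create the busybox pod from testdata, wait until it is Running, then exec a command in it. A minimal sketch of that sequence against the no-preload-690795 context; the kubectl wait step is an assumed stand-in for the test's own polling helper, while the other two commands mirror the log above:

    kubectl --context no-preload-690795 create -f testdata/busybox.yaml
    # assumed equivalent of the test's 8m readiness poll
    kubectl --context no-preload-690795 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context no-preload-690795 exec busybox -- /bin/sh -c "ulimit -n"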

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-690795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-690795 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)
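EnableAddonWhileActive turns on the metrics-server addon with its image and registry overridden, then inspects the resulting Deployment; the fake.domain registry presumably keeps the image from ever being pulled, so only the addon wiring is being checked. A minimal sketch of the same two steps by hand:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-690795 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # confirm the overridden image/registry landed in the Deployment spec
    kubectl --context no-preload-690795 describe deploy/metrics-server -n kube-system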

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-920571 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [733f6096-ed09-4e5e-b2c8-b98cb17a8661] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [733f6096-ed09-4e5e-b2c8-b98cb17a8661] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003709992s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-920571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-920571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-920571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b91f0918-0272-41b0-a557-a62ac2209e94] Pending
helpers_test.go:344: "busybox" [b91f0918-0272-41b0-a557-a62ac2209e94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b91f0918-0272-41b0-a557-a62ac2209e94] Running
E0829 19:28:44.927225   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/auto-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004138157s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-672127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-672127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (672.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-690795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-690795 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m11.780079135s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-690795 -n no-preload-690795
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (672.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (595.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-920571 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-920571 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m54.80289028s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-920571 -n embed-certs-920571
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (595.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (543.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-672127 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0829 19:31:18.939858   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/custom-flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:30.926408   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/flannel-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:33.902030   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/kindnet-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:34.414215   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/enable-default-cni-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.164307   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.170771   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.182205   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.203642   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.245101   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.326517   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.488118   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:41.809826   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:42.451907   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:43.733820   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:46.296070   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:31:51.417900   20259 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-13056/.minikube/profiles/bridge-633326/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-672127 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m3.704264105s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-672127 -n default-k8s-diff-port-672127
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (543.94s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-467349 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-467349 --alsologtostderr -v=3: (2.28122106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349: exit status 7 (63.883806ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
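EnableAddonAfterStop verifies that an addon can be enabled while the profile is stopped: the Go-template status query exits with status 7 and prints "Stopped" (which the test treats as acceptable), and the dashboard addon is then enabled anyway. A minimal sketch of the same sequence, mirroring the commands logged above:

    # only the host field is queried; a stopped profile exits non-zero
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467349 -n old-k8s-version-467349
    # addons can be toggled without a running cluster
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4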

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (43.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-371258 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-371258 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (43.575060842s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-371258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-371258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105498823s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-371258 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-371258 --alsologtostderr -v=3: (7.330202866s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-371258 -n newest-cni-371258
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-371258 -n newest-cni-371258: exit status 7 (63.912334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-371258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-371258 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-371258 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (35.492467922s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-371258 -n newest-cni-371258
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-371258 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
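The image audit above is a single CLI call; a hedged one-liner that pulls the same non-minikube image name out of the JSON listing (the grep pattern is illustrative only):

	out/minikube-linux-amd64 -p newest-cni-371258 image list --format=json | grep -o 'kindest/kindnetd[^", ]*'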

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-371258 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-371258 -n newest-cni-371258
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-371258 -n newest-cni-371258: exit status 2 (224.242867ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-371258 -n newest-cni-371258
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-371258 -n newest-cni-371258: exit status 2 (226.251979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-371258 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-371258 -n newest-cni-371258
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-371258 -n newest-cni-371258
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.28s)
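The pause/unpause round trip above reduces to the following command sequence; a sketch of the same probes (exit status 2 from status while the cluster is paused is expected and tolerated by the test):

	out/minikube-linux-amd64 pause -p newest-cni-371258 --alsologtostderr -v=1
	# while paused: {{.APIServer}} prints "Paused", {{.Kubelet}} prints "Stopped"; both probes exit 2
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-371258 -n newest-cni-371258
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-371258 -n newest-cni-371258
	out/minikube-linux-amd64 unpause -p newest-cni-371258 --alsologtostderr -v=1
	# after unpause the same status probes return to exit 0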

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
264 TestNetworkPlugins/group/kubenet 2.96
273 TestNetworkPlugins/group/cilium 3.11
279 TestStartStop/group/disable-driver-mounts 0.15
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
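This and the seven tunnel subtests that follow all skip for the same reason: the test user cannot run the route command without a password prompt. A quick precondition probe for the runner (a sketch, and only a rough proxy for the check the test performs; sudoers setup is environment-specific):

	# non-interactive sudo: fails if a password would be required
	sudo -n true && echo "passwordless sudo available" || echo "password required; tunnel tests will skip"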

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-633326 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-633326" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-633326

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633326"

                                                
                                                
----------------------- debugLogs end: kubenet-633326 [took: 2.81484356s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-633326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-633326
--- SKIP: TestNetworkPlugins/group/kubenet (2.96s)
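As the skip reason at the top of this section notes (net_test.go:93), the crio runtime needs a CNI-based network plugin, which kubenet does not provide. To bring up a comparable crio cluster by hand, an explicit CNI has to be chosen at start time; a hedged example (the bridge CNI choice is illustrative, not what the suite itself uses):

	out/minikube-linux-amd64 start -p kubenet-633326 --driver=kvm2 --container-runtime=crio --cni=bridge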

                                                
                                    
TestNetworkPlugins/group/cilium (3.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-633326 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-633326" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-633326

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: docker system info:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: cri-docker daemon status:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: cri-docker daemon config:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: cri-dockerd version:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: containerd daemon status:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: containerd daemon config:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: containerd config dump:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: crio daemon status:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: crio daemon config:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: /etc/crio:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

>>> host: crio config:
* Profile "cilium-633326" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633326"

----------------------- debugLogs end: cilium-633326 [took: 2.960993957s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-633326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-633326
--- SKIP: TestNetworkPlugins/group/cilium (3.11s)

x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-831934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-831934
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
